Test Report: Docker_Linux 21594

532dacb4acf31553658ff6b0bf62fcf9309f2277:2025-09-19:41507

Failed tests (18/334)

TestMultiControlPlane/serial/StartCluster (324.63s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0919 22:25:08.951663  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:25.092055  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:52.796157  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.466420  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.472796  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.484145  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.505599  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.547020  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.628482  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:33.790028  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:34.111699  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:34.753553  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:36.035653  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:38.597484  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:43.719734  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:28:53.961414  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:29:14.443747  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: exit status 80 (5m22.640693487s)

-- stdout --
	* [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
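The 129-byte daemon.json pushed here is what switches dockerd itself to the systemd cgroup driver. Its contents are not logged; Docker's documented knob for this is exec-opts, so the file plausibly looks like the following (an assumption, not a capture from the run):
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}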
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
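The pod manifest above is what lands in /etc/kubernetes/manifests/kube-vip.yaml via the 1364-byte scp below, so kubelet runs kube-vip as a static pod that claims the HA virtual IP 192.168.49.254 on eth0 via ARP. Control-plane load-balancing was skipped because the ip_vs modules are missing, but the VIP itself should still be advertised by the elected leader; a quick manual check (sketch) would be:
	minikube ssh -p ha-434755 -- ip addr show eth0 | grep 192.168.49.254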
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
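The 2209-byte kubeadm.yaml.new staged here is the kubeadm config printed at 22:24:31.469 above; it is the file consumed by the kubeadm init invocation at 22:24:32. To reproduce or debug it outside the harness, the same YAML can be exercised without touching a host (a sketch, assuming a local kubeadm v1.34 binary and the file saved as kubeadm.yaml):
	sudo kubeadm init --config kubeadm.yaml --dry-run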
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
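The three blocks above install each CA into /usr/share/ca-certificates and create OpenSSL-style hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) under /etc/ssl/certs, which is how TLS clients on the node locate the trust anchors. The link name is simply the subject hash; for the minikube CA from this run:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941, matching /etc/ssl/certs/b5213941.0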
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
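The configmap pipeline at 22:24:47.020 injects a hosts block into CoreDNS's Corefile so that host.minikube.internal resolves to the docker network gateway from inside pods. Reconstructed from the sed expression above, the resulting Corefile fragment should read:
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }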
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
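	This diff-or-replace step only swaps in the regenerated unit and restarts Docker when it actually differs from the installed /lib/systemd/system/docker.service: diff -u exits non-zero on any difference (or when the installed file is missing), which triggers the move, daemon-reload, enable, and restart; an unchanged configuration never restarts the daemon. The same check-then-replace pattern reduced to a standalone sketch (example.service and the /tmp path are placeholders, not paths minikube uses):

	# Install a regenerated unit file and restart its service only if it actually changed.
	sudo diff -u /lib/systemd/system/example.service /tmp/example.service.new || {
	  sudo mv /tmp/example.service.new /lib/systemd/system/example.service
	  sudo systemctl daemon-reload
	  sudo systemctl enable example.service
	  sudo systemctl restart example.service
	}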
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
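The manifest above is the static kube-vip pod minikube writes for the control-plane VIP 192.168.49.254. It runs in ARP mode (vip_arp: "true"); IPVS-based control-plane load-balancing was skipped because the ip_vs probe a few lines earlier came back empty. A small sketch of that probe, with a hypothetical remediation that was not part of this run:

    # Same probe minikube ran above; modprobe is only a manual fallback and
    # is NOT something this test run executed.
    if ! lsmod | grep -q '^ip_vs'; then
      sudo modprobe ip_vs || echo "ip_vs unavailable; kube-vip stays in VIP-only mode"
    fi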
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
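The join above authenticates with a bootstrap token plus a CA-cert hash for discovery. If that hash ever needs to be re-derived by hand, the standard kubeadm recipe is sketched below; the ca.crt path is an assumption based on the /var/lib/minikube/certs directory used elsewhere in this log:

    # Derive the sha256 discovery hash of the cluster CA public key
    # (standard kubeadm procedure; ca.crt path assumed, not taken from this log).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'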
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
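After the join, minikube labels the new node and clears the control-plane NoSchedule taint so it can also run workloads (every node in this profile is ControlPlane:true Worker:true). Expressed as plain kubectl, the two calls above amount to:

    # Sketch of the label/taint steps; minikube actually invokes its bundled
    # kubectl with --kubeconfig=/var/lib/minikube/kubeconfig on the node.
    kubectl label --overwrite nodes ha-434755-m02 minikube.k8s.io/primary=false
    kubectl taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-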
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
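The health probe hits /healthz on the first control plane directly (the stale VIP host was overridden just above). Reproducing it by hand requires the profile's client certificates, whose paths appear in the rest.Config dump; a sketch:

    # Manual healthz probe equivalent to the check above (expects "ok").
    MK=/home/jenkins/minikube-integration/21594-142711/.minikube
    curl --cacert "$MK/ca.crt" \
         --cert   "$MK/profiles/ha-434755/client.crt" \
         --key    "$MK/profiles/ha-434755/client.key" \
         https://192.168.49.2:8443/healthz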
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
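configureAuth generated a server certificate for the new machine with SANs covering 127.0.0.1, 192.168.49.4, ha-434755-m03, localhost and minikube, then copied ca.pem, server.pem and server-key.pem into /etc/docker. A quick way to sanity-check the copied material on the node (a sketch, not a step taken in this run):

    # Verify the Docker TLS server cert chains to the minikube CA and
    # inspect its SANs (run inside the ha-434755-m03 container).
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem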
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
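The diff above shows exactly what minikube changed in the stock docker.service: it clears the inherited ExecStart and pins a single TLS-enabled one on tcp://0.0.0.0:2376, injects NO_PROXY entries for the earlier nodes, and raises the Limit* settings. To inspect the unit systemd actually ended up with, a sketch (run inside the node, e.g. via minikube ssh -p ha-434755 -n ha-434755-m03):

    # Show the installed unit and the effective ExecStart after the swap.
    sudo systemctl cat docker.service
    sudo systemctl show docker --property=ExecStart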
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
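Before restarting containerd, the bridge-netfilter and IPv4 forwarding prerequisites are checked and set with raw /proc writes. The same pair in sysctl form, as an equivalent sketch:

    # Equivalent to the two steps above: read the bridge-nf flag, enable forwarding.
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sysctl -w net.ipv4.ip_forward=1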
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache          │ functional-432755 cache reload                                                                                    │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ functional-432755 cache add minikube-local-cache-test:functional-432755                                           │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ functional-432755 cache delete minikube-local-cache-test:functional-432755                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ ssh            │ functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                           │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                               │ minikube          │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ cache          │ list                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 19 Sep 25 22:22 UTC │ 19 Sep 25 22:22 UTC │
	│ service        │ functional-432755 service --namespace=default --https --url hello-node                                            │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ mount          │ -p functional-432755 --kill=true                                                                                  │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │                     │
	│ license        │                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ service        │ functional-432755 service hello-node --url --format={{.IP}}                                                       │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ update-context │ functional-432755 update-context --alsologtostderr -v=2                                                           │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ update-context │ functional-432755 update-context --alsologtostderr -v=2                                                           │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ update-context │ functional-432755 update-context --alsologtostderr -v=2                                                           │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image          │ functional-432755 image ls --format short --alsologtostderr                                                       │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ service        │ functional-432755 service hello-node --url                                                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image          │ functional-432755 image ls --format yaml --alsologtostderr                                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh            │ functional-432755 ssh pgrep buildkitd                                                                             │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │                     │
	│ image          │ functional-432755 image ls --format json --alsologtostderr                                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image          │ functional-432755 image build -t localhost/my-image:functional-432755 testdata/build --alsologtostderr            │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image          │ functional-432755 image ls --format table --alsologtostderr                                                       │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image          │ functional-432755 image ls                                                                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ delete         │ -p functional-432755                                                                                              │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ start          │ ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
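For reference, the SANs that actually landed in the generated server certificate can be inspected with openssl (paths taken from the log line above; this is only an illustrative check, not something the test runs):
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'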
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
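The overlay detection above simply reads the filesystem type of / inside the node container; assuming the container name ha-434755 from the log, the same probe can be reproduced from the host:
    docker exec ha-434755 sh -c 'df --output=fstype / | tail -n 1'   # expected to print "overlay"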
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
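With the new unit swapped in above, the effective docker.service definition and the daemon's cgroup driver can be double-checked from the host (the test itself runs equivalent commands later in the log):
    docker exec ha-434755 sudo systemctl cat docker.service
    docker exec ha-434755 docker info --format '{{.CgroupDriver}}'   # should report "systemd" once daemon.json is written below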
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
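The rename to *.mk_disabled is how minikube parks conflicting CNI configs rather than deleting them; to see which files were parked on the node, something like the following works (container name as above):
    docker exec ha-434755 sudo ls -l /etc/cni/net.d/   # the *.mk_disabled entries are the disabled bridge/podman configs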
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
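The run of sed edits above rewrites /etc/containerd/config.toml in place (systemd cgroups, pause image, CNI conf dir) before containerd is restarted. A quick way to confirm the key settings took effect, assuming the same container name:
    docker exec ha-434755 sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml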
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
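The kubelet flags above end up in a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down); the merged unit can be viewed on the node with:
    docker exec ha-434755 sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in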
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
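This generated file could in principle be exercised without touching the cluster, e.g. via a dry run using the kubeadm binary path shown later in the log (illustrative only; the test goes straight to the real init further down):
    docker exec ha-434755 sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run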
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
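kube-vip's IPVS-based control-plane load-balancing is skipped here because the ip_vs modules are not visible in the node's kernel. On a host where they are available, the same probe would succeed; loading them manually is a host-level operation, not something the test does:
    lsmod | grep ip_vs   # the probe minikube runs
    sudo modprobe ip_vs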
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
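This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so kubelet runs it as a static pod that advertises the HA VIP 192.168.49.254. Once the control plane is up, a rough check would be (the exact pod name is an assumption; static pods are suffixed with the node name):
    docker exec ha-434755 sudo ls /etc/kubernetes/manifests/kube-vip.yaml
    kubectl -n kube-system get pods | grep kube-vip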
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
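Since the API server certificate is signed by the shared minikubeCA generated earlier, the chain can be sanity-checked locally with openssl (paths as they appear in the log; purely illustrative):
    openssl verify \
      -CAfile /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt \
      /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt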
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
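The openssl -hash / ln -fs pairs above implement the standard OpenSSL CA directory layout: each CA placed under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so verification can find it. The same pattern spelled out (variable names are hypothetical):
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"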
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
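The four grep/rm pairs above implement a stale-config check: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, otherwise it is deleted so that kubeadm init can regenerate it. A minimal standalone sketch of that pattern, run locally with plain exec rather than minikube's ssh_runner (the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleConfigs mirrors the grep-then-remove pattern in the log: any
	// kubeconfig that does not mention the expected API endpoint is treated
	// as stale and removed before kubeadm init runs.
	func cleanStaleConfigs(endpoint string, files []string) {
		for _, f := range files {
			// grep exits non-zero when the pattern is absent or the file is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - will remove: %v\n", endpoint, f, err)
				// Ignore errors: "rm -f" succeeds even if the file never existed.
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}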
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
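The two steps above copy the rendered CNI manifest onto the node and apply it with the kubectl binary bundled in the node image, against the node-local kubeconfig. A rough sketch of the same sequence driven over plain scp/ssh; the host, key path and manifest name are placeholders, not minikube's real plumbing:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyCNIManifest copies a CNI manifest to the node and applies it with
	// the node's own kubectl and kubeconfig, as the log lines show.
	func applyCNIManifest(host, keyPath, localManifest string) error {
		remote := "/var/tmp/minikube/cni.yaml"
		if out, err := exec.Command("scp", "-i", keyPath, localManifest,
			fmt.Sprintf("docker@%s:%s", host, remote)).CombinedOutput(); err != nil {
			return fmt.Errorf("scp: %v: %s", err, out)
		}
		apply := "sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply " +
			"--kubeconfig=/var/lib/minikube/kubeconfig -f " + remote
		if out, err := exec.Command("ssh", "-i", keyPath, "docker@"+host, apply).CombinedOutput(); err != nil {
			return fmt.Errorf("apply: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := applyCNIManifest("127.0.0.1", "/home/user/.ssh/id_rsa", "cni.yaml"); err != nil {
			fmt.Println(err)
		}
	}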
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
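The repeated "kubectl get sa default" runs between 22:24:45 and 22:24:46 are a poll loop: RBAC elevation is only considered done once the default service account exists. A small sketch of such a wait, assuming kubectl on PATH rather than the bundled binary minikube actually invokes over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until "kubectl get sa default" succeeds or the
	// deadline passes, mirroring the ~500ms retry cadence visible in the log.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}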
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
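The long shell pipeline at 22:24:47.020097 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1). A hedged sketch of that Corefile transformation done as plain string handling in Go, leaving out the kubectl get/replace plumbing the real step uses:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} stanza ahead of the
	// "forward . /etc/resolv.conf" line of a Corefile, mirroring the sed
	// expression in the log.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				b.WriteString(hosts)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}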
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
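The "calculated static IP" line assigns the new m02 node the next address in the existing ha-434755 network. A small sketch of that arithmetic, assuming the convention visible in the log (gateway 192.168.49.1, node N gets .1+N):

	package main

	import (
		"fmt"
		"net"
	)

	// nextNodeIP returns gateway+offset inside the cluster network, which is
	// the convention the log reflects: node 1 -> 192.168.49.2, node 2 -> 192.168.49.3.
	func nextNodeIP(gateway string, nodeIndex int) net.IP {
		ip := net.ParseIP(gateway).To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(nodeIndex) // assumes the /24 is never exhausted
		return out
	}

	func main() {
		fmt.Println(nextNodeIP("192.168.49.1", 2)) // 192.168.49.3 for ha-434755-m02
	}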
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
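The docker run completed above untars the preloaded image tarball into the node's /var volume by mounting both into a throwaway kicbase container. A sketch of how such a command line could be assembled; the tarball path is a placeholder and the helper is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload untars a preloaded-images tarball into a named docker
	// volume via a throwaway container, as the log's docker run shows.
	func extractPreload(image, volume, tarball string) *exec.Cmd {
		return exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	}

	func main() {
		cmd := extractPreload(
			"gcr.io/k8s-minikube/kicbase:v0.0.48",
			"ha-434755-m02",
			"/path/to/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4",
		)
		fmt.Println(cmd.String()) // print rather than run, since this is only a sketch
	}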
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
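The server cert generated above is signed by the profile CA and carries the SAN list shown in the log (127.0.0.1, 192.168.49.3, ha-434755-m02, localhost, minikube). A self-contained sketch of issuing such a certificate with crypto/x509; here the CA is generated on the fly instead of being read from the .minikube/certs directory, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for ~/.minikube/certs/ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
			DNSNames:     []string{"ha-434755-m02", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}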
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
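For readers following the cri-dockerd bring-up above: the two "Will wait 60s" lines boil down to polling a socket path until it appears. A minimal Go sketch of that wait, using the path and timeout from the log (an illustration only, not minikube's ssh_runner-based implementation):

// waitsocket.go - poll for a unix socket file until it exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	// Path and timeout taken from the log lines above.
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}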
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
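The kube-vip static-pod manifest above is emitted by the config generator in kube-vip.go. A hedged sketch of how such a manifest can be rendered from a text/template, parameterizing only the image, interface, and VIP shown in the log (the real generator emits the full env list above):

// kubevip_sketch.go - render a cut-down kube-vip pod manifest from a template.
package main

import (
	"os"
	"text/template"
)

const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
  hostNetwork: true
`

func main() {
	// Values copied from the manifest logged above.
	data := struct{ Image, Interface, VIP string }{
		Image:     "ghcr.io/kube-vip/kube-vip:v1.0.0",
		Interface: "eth0",
		VIP:       "192.168.49.254",
	}
	t := template.Must(template.New("kube-vip").Parse(podTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}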
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
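The join above took roughly 18s end to end. A sketch (not minikube's code) of assembling and running the same kubeadm join invocation; the flags mirror the logged command, while the token and discovery hash are placeholders:

// join_sketch.go - build and run a control-plane kubeadm join command.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func joinControlPlane(ctx context.Context, token, caCertHash, nodeName, advertiseIP string) error {
	// Flags mirror the command logged above for ha-434755-m02.
	args := []string{
		"join", "control-plane.minikube.internal:8443",
		"--token", token,
		"--discovery-token-ca-cert-hash", caCertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
	}
	ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()
	out, err := exec.CommandContext(ctx, "kubeadm", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm join failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Token and hash are placeholders; the real values come from
	// `kubeadm token create --print-join-command` on the primary node.
	_ = joinControlPlane(context.Background(), "<token>", "sha256:<hash>", "ha-434755-m02", "192.168.49.3")
}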
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
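The healthz probe logged above is a plain HTTPS GET authenticated with the profile's client certificate. A minimal Go sketch, assuming illustrative file names for the client cert, key, and cluster CA:

// healthz_sketch.go - mTLS GET against the apiserver healthz endpoint.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Illustrative paths; the report uses the ha-434755 profile's client.crt/client.key and .minikube/ca.crt.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}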
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
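Key creation for the kic node (the id_rsa generated above) amounts to generating an RSA keypair, writing the private key as PEM, and writing the public half in authorized_keys format. A self-contained sketch under those assumptions (output file names are illustrative):

// keygen_sketch.go - create an RSA keypair and an authorized_keys entry.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key (id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		panic(err)
	}
	// Marshal the public half in authorized_keys format (id_rsa.pub).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}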
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
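The provisioning commands above run over SSH to the container's forwarded port 32793 as user docker. A hedged sketch using golang.org/x/crypto/ssh that dials the same address with the machine key from the log and runs `hostname`:

// sshexec_sketch.go - run a single command on the node over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as logged above for ha-434755-m03.
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local, throwaway node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}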
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41bb0b28153e190e783092cfcd3e860459231dd55e7746d59828a10d315188f9/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37e3f52bd7982       6e38f40d628db                                                                                       4 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                       4 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                       4 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                       4 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                       4 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a            4 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                       4 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                       4 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c   5 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                       5 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                       5 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                       5 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                       5 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:29:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:28:28 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:28:28 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:28:28 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:28:28 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m57s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m57s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m55s                kube-proxy       
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m8s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m8s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m8s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m                   kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m                   kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m                   kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m58s                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           4m29s                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:29:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:25:37 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:25:37 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:25:37 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:25:37 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m27s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m27s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m14s  kube-proxy       
	  Normal  RegisteredNode  4m24s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  4m23s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  4m7s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:29:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:26:09 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:26:09 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:26:09 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:26:09 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m1s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m6s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4m4s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  4m3s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  4m2s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.268113Z","caller":"etcdserver/server.go:1838","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.268302Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.272009Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1466368,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.274638Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:25:32.274740Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.276836Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:25:32.276872Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.284009Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294689Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294789Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:25:32.314771Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:25:32.314816Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	
	
	==> kernel <==
	 22:29:44 up  1:12,  0 users,  load average: 0.44, 4.23, 27.87
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:29:03.800106       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:29:13.792481       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:29:13.792539       1 main.go:301] handling current node
	I0919 22:29:13.792556       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:29:13.792561       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:29:13.792859       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:29:13.792873       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:29:23.796613       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:29:23.796653       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:29:23.797221       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:29:23.797257       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:29:23.797450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:29:23.797464       1 main.go:301] handling current node
	I0919 22:29:33.799595       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:29:33.799631       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:29:33.799839       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:29:33.799852       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:29:33.799961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:29:33.799975       1 main.go:301] handling current node
	I0919 22:29:43.800602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:29:43.800641       1 main.go:301] handling current node
	I0919 22:29:43.800661       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:29:43.800668       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:29:43.800873       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:29:43.800890       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:40.696152       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 22:24:40.699966       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 22:24:40.699987       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 22:24:41.126661       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 22:24:41.164479       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 22:24:41.300535       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 22:24:41.306999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0919 22:24:41.308248       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 22:24:41.312358       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 22:24:41.730293       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:24:44.451829       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:24:44.460659       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 22:24:44.467080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:39.747690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:24:39.747769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:24:39.747766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867473    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-proxy\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867488    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-xtables-lock\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (324.63s)
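For reference, the two post-mortem probes above (helpers_test.go:262 and :269) can be re-run by hand; a minimal sketch, assuming the ha-434755 profile from this run is still up:

	# API server health as minikube reports it for the ha-434755 control-plane node
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
	# list any pod not in the Running phase, across all namespaces
	kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running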

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (94.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 kubectl -- rollout status deployment/busybox: (3.902894371s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:29:49.845189  146335 retry.go:31] will retry after 1.072509409s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:29:51.033520  146335 retry.go:31] will retry after 1.441905534s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:29:52.595398  146335 retry.go:31] will retry after 1.701897981s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:29:54.416157  146335 retry.go:31] will retry after 3.815248928s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
E0919 22:29:55.405310  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:29:58.349595  146335 retry.go:31] will retry after 5.76712265s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:30:04.232308  146335 retry.go:31] will retry after 5.556087315s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:30:09.908780  146335 retry.go:31] will retry after 10.113381259s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:30:20.142313  146335 retry.go:31] will retry after 22.162140097s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0919 22:30:42.425703  146335 retry.go:31] will retry after 34.307029107s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
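The assertion above polls the busybox deployment's pod IPs and expects three; this run only ever reported 10.244.1.2 and 10.244.0.4. A minimal sketch of the same query, assuming the profile is still running:

	# same jsonpath query the test retries at ha_test.go:140; a passing run prints three IPs
	out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'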
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io: exit status 1 (163.120248ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-7b57f96db7-c67nh could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.io
E0919 22:31:17.327555  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default: exit status 1 (162.779713ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-7b57f96db7-c67nh could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (163.487499ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-7b57f96db7-c67nh could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-432755 image ls                                                                                        │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ delete  │ -p functional-432755                                                                                              │ functional-432755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ start   │ ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                  │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- rollout status deployment/busybox                                                            │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                             │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io                                      │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.io                                      │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.io                                      │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default                                 │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default                                 │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default                                 │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local               │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default.svc.cluster.local               │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default.svc.cluster.local               │ ha-434755         │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
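	Note: the docker.service update above is deliberately idempotent. "diff -u" exits 0 when the freshly generated unit matches the installed one, so the block after "||" (move the new file into place, daemon-reload, enable and restart docker) only runs when the unit actually changed, as it did here. A minimal standalone sketch of the same pattern, using the paths from the log:
	
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	           sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }
	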
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
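	Note: conflicting bridge/podman CNI configurations are disabled by renaming them with a .mk_disabled suffix rather than deleting them. The equivalent manual steps for the two files disabled above would be:
	
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled
	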
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
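	Note: the 129-byte /etc/docker/daemon.json payload is not printed in the log; per docker.go:575 its purpose is to switch dockerd to the systemd cgroup driver. A typical daemon.json that does this, shown only as an illustrative sketch (the exact minikube payload may include more options), would be:
	
	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    {
	      "exec-opts": ["native.cgroupdriver=systemd"]
	    }
	    EOF
	    sudo systemctl restart docker   # the log performs this restart a few lines below
	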
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
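	Note: the kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below) and consumed by the "kubeadm init" invocation later in this log. When reproducing it outside minikube, it can be sanity-checked first without modifying the node, e.g.:
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	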
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
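	Note: because "lsmod | grep ip_vs" returned nothing, kube-vip's IPVS-based control-plane load balancing is skipped and the generated manifest relies on ARP mode (vip_arp=true) to advertise the VIP 192.168.49.254 on eth0. On a host where the module is available it could be loaded beforehand (illustrative):
	
	    lsmod | grep ip_vs || sudo modprobe ip_vs
	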
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
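	Note: the openssl/ln sequence above is how the CA bundles are made visible to TLS clients: each PEM is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem). The equivalent manual steps for one certificate are:
	
	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	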
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
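Note on the step above: the freshly rendered docker.service.new is only swapped in when it differs from the unit already installed, after which systemd is reloaded and Docker restarted. To confirm which unit and which ExecStart line systemd actually ended up using on the node, standard systemctl queries are enough (a minimal sketch, run manually on the node, not part of the test output):

    systemctl cat docker.service                         # unit file systemd actually loaded
    systemctl show -p ExecStart --value docker.service   # effective dockerd command line
    systemctl is-enabled docker && systemctl is-active docker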
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
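The two find commands above first patch any loopback CNI config so it carries a "name": "loopback" field and cniVersion 1.0.0, then rename the bridge and podman configs to *.mk_disabled so they cannot conflict with the cluster's own CNI. A quick way to inspect the result on the node (a sketch; exact file names depend on the base image):

    ls -la /etc/cni/net.d/                # disabled configs now end in .mk_disabled
    cat /etc/cni/net.d/*loopback.conf*    # should show "name": "loopback" and "cniVersion": "1.0.0"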
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
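The sed edits above switch containerd to the systemd cgroup driver (matching the "systemd" driver detected on the host), pin the sandbox image to registry.k8s.io/pause:3.10.1 and force the runc v2 runtime before containerd is restarted. The outcome can be spot-checked directly in the config file (a sketch):

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
    grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: registry.k8s.io/pause:3.10.1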
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
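The 129-byte /etc/docker/daemon.json written here is what moves dockerd itself onto the systemd cgroup driver; its exact contents are not shown in the log, but a typical minikube-style file is along these lines (a sketch only, keys assumed rather than taken from the log):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    docker info --format '{{.CgroupDriver}}'   # should print "systemd" once docker is restarted below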
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
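The apiserver serving certificate generated above must name every address a client might dial: the in-cluster service IP 10.96.0.1, localhost, all three control-plane node IPs and the kube-vip virtual IP 192.168.49.254. The SAN list can be confirmed with openssl against the profile copy referenced in the log (a sketch):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt \
      | grep -A1 'Subject Alternative Name'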
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
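The three blocks above install the minikube CA and the two extra certificates into the system trust store using OpenSSL's hashed-symlink convention: each certificate is linked under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL locates CAs in a directory. The same steps for an arbitrary certificate (a sketch; the file name is hypothetical):

    CERT=/usr/share/ca-certificates/example-ca.pem   # hypothetical certificate
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"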
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
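Because the lsmod check for ip_vs came back empty, the manifest above runs kube-vip in plain ARP/leader-election mode for the VIP 192.168.49.254 instead of also enabling IPVS-based control-plane load-balancing. Whether the modules could be made available can be checked on the node like this (a sketch; availability depends on the host kernel):

    lsmod | grep ip_vs || echo "ip_vs modules not loaded"
    sudo modprobe ip_vs && lsmod | grep ip_vs   # only works if the host kernel ships the module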
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
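The join above is the standard two-step kubeadm control-plane join: a non-expiring bootstrap token is minted on the primary with "kubeadm token create --print-join-command --ttl=0" (a few lines earlier), and the printed command is replayed on the new node with --control-plane so it receives its own apiserver, controller-manager, scheduler and etcd member. Reduced to its essentials (a sketch; token and hash come from the first command):

    # on an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # on the joining node, using the printed token and CA hash
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --cri-socket unix:///var/run/cri-dockerd.sock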
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
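After the join, the new node is labelled with minikube metadata and the control-plane NoSchedule taint is removed so regular workloads can also land on it. Both effects are easy to confirm with kubectl (a sketch):

    kubectl get node ha-434755-m03 --show-labels
    kubectl describe node ha-434755-m03 | grep -i taints   # the control-plane:NoSchedule taint should be gone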
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
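The health gate used here is a plain HTTPS GET against /healthz on the first control plane; the same probe can be issued by hand, either directly or through the kube-vip VIP (a sketch; -k skips TLS verification):

    curl -sk https://192.168.49.2:8443/healthz     # expect: ok
    curl -sk https://192.168.49.254:8443/healthz   # same check through the VIP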
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
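
	Editor's note: the failure above is the pod_ready wait giving up after its 4m0s deadline because "kube-proxy-dzrbh" never reported Ready, which minikube surfaces as GUEST_START. The following is a minimal, illustrative sketch (not minikube's actual pod_ready implementation) of that kind of readiness poll with client-go: list the "kube-system" pods matching a label selector and keep polling until every pod reports the PodReady condition or the deadline expires. The kubeconfig path, selector, poll interval, and timeout below are assumptions chosen to mirror the logged behaviour.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod carries a PodReady condition with status True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config); adjust as needed.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// The label the failing wait in the log was stuck on.
		selector := "k8s-app=kube-proxy"

		// Poll every 2s, give up after 4 minutes (mirrors the 4m0s budget in the log).
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for i := range pods.Items {
					if !podIsReady(&pods.Items[i]) {
						fmt.Printf("pod %q is not Ready yet\n", pods.Items[i].Name)
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			fmt.Println("timed out waiting for pods:", err)
		}
	}

	Against the run above, such a loop would keep reporting kube-proxy-dzrbh as not Ready until the deadline, matching the repeated pod_ready.go:104 warnings and the final GUEST_START exit.
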
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         6 minutes ago        Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         6 minutes ago        Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         6 minutes ago        Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         6 minutes ago        Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         6 minutes ago        Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago        Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         6 minutes ago        Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago        Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         6 minutes ago        Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         6 minutes ago        Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         6 minutes ago        Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         6 minutes ago        Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m32s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m32s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m35s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m32s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m43s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m43s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m43s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m4s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m2s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m49s  kube-proxy       
	  Normal  RegisteredNode  5m59s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  5m58s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  5m42s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m36s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m41s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  5m39s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  5m38s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  5m37s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.268113Z","caller":"etcdserver/server.go:1838","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.268302Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.272009Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1466368,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.274638Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:25:32.274740Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.276836Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:25:32.276872Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.284009Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294689Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294789Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:25:32.314771Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:25:32.314816Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	
	
	==> kernel <==
	 22:31:19 up  1:13,  0 users,  load average: 0.35, 3.21, 25.21
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:30:33.800799       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:43.801030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:30:43.801063       1 main.go:301] handling current node
	I0919 22:30:43.801079       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:30:43.801085       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:30:43.801392       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:30:43.801417       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:53.792599       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:30:53.792637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:30:53.792846       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:30:53.792862       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:53.792998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:30:53.793012       1 main.go:301] handling current node
	I0919 22:31:03.791633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:03.791684       1 main.go:301] handling current node
	I0919 22:31:03.791704       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:03.791709       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:03.791894       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:03.791909       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:13.794575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:13.794625       1 main.go:301] handling current node
	I0919 22:31:13.794642       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:13.794647       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:13.794848       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:13.794863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:44.467080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (94.73s)
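The post-mortem query above only surfaces pods that are not Running; for a DNS-style failure inside an otherwise Running busybox pod it can also help to check where the busybox replicas and CoreDNS were scheduled. A minimal sketch, assuming the ha-434755 profile is still up and that CoreDNS carries the standard k8s-app=kube-dns label (pod name taken from the node listing above):

  # busybox replicas and the nodes they were scheduled on
  kubectl --context ha-434755 get pods -o wide

  # CoreDNS placement and readiness
  kubectl --context ha-434755 -n kube-system get pods -l k8s-app=kube-dns -o wide

  # resolver config seen by the replica on ha-434755-m03
  kubectl --context ha-434755 exec busybox-7b57f96db7-c67nh -- cat /etc/resolv.conf
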

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (2.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:214: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
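The failing check above is the pipeline from the ha_test.go:207 command: it resolves host.minikube.internal inside the busybox pod and takes the third field of the fifth line of the nslookup output as the host IP. A minimal manual reproduction, assuming the ha-434755 profile is still up (pod and network names taken from the captures in this report; on the docker driver the address is typically the network gateway, 192.168.49.1 in the docker inspect output below):

  # re-run the exact in-pod lookup the test performs
  out/minikube-linux-amd64 -p ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

  # gateway of the ha-434755 docker network, for comparison
  docker network inspect ha-434755 --format '{{ (index .IPAM.Config 0).Gateway }}'
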
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.152945993s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ ha-434755 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- rollout status deployment/busybox                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
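This drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) pins the kubelet binary path, kubeconfig locations, hostname override and node IP. A sketch of confirming which flags the kubelet actually picked up, once it has been started later in the log:

sudo systemctl cat kubelet                        # shows the unit plus the 10-kubeadm.conf drop-in
ps -o args= -C kubelet | tr ' ' '\n' | grep -E -- '--(node-ip|hostname-override)='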
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
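The generated kubeadm config above is shipped to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before `kubeadm init` runs. A sketch of sanity-checking that file by hand, assuming the minikube-staged kubeadm binary path used elsewhere in this log:

sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
# or a full dry run that renders manifests without changing the node:
sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run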
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
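Because `lsmod | grep ip_vs` failed above, kube-vip is generated in ARP mode only and IPVS-based control-plane load-balancing is skipped. A sketch of loading the modules up front, and of confirming the VIP from this manifest once the static pod is running (the mirror-pod name assumes the usual <pod>-<node> convention):

# optional: make IPVS available so kube-vip can enable control-plane load-balancing
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh && lsmod | grep ip_vs
# once kubelet has started /etc/kubernetes/manifests/kube-vip.yaml:
kubectl -n kube-system get pod kube-vip-ha-434755       # assumed mirror-pod name for the static pod above
ip addr show eth0 | grep 192.168.49.254                 # the VIP from the manifest, bound on vip_interface eth0
curl -sk https://192.168.49.254:8443/healthz            # API server reachable through the VIP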
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
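The openssl/ln pairs above implement the standard OpenSSL subject-hash layout: each CA copied into /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so CApath lookups can find it. A sketch of checking the result for the minikube CA (b5213941 is the hash linked a few lines above):

openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem
openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt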
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
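The join commands above embed a discovery-token CA cert hash. That value can be recomputed from the cluster CA to vet a join command before handing it to another node; a sketch using the certificatesDir from the kubeadm config above (/var/lib/minikube/certs):

openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
# should print 6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a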
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
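The sed pipeline a few lines above rewrites the coredns ConfigMap so host.minikube.internal resolves to the gateway IP from inside the cluster. A sketch of reading the patched Corefile back; the hosts block is exactly what the sed expression inserts:

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
#        hosts {
#           192.168.49.1 host.minikube.internal
#           fallthrough
#        }
# inserted before the "forward . /etc/resolv.conf" line, plus a "log" line inserted before "errors"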
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
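The two cli_runner calls above recover the existing "ha-434755" network and pick the next free static address for the new node. A rough by-hand equivalent on the same host (the --format template is a trimmed version of the one in the line above; the printed subnet is what this run implies, not captured verbatim in the log):

    docker network inspect ha-434755 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected here: 192.168.49.0/24 192.168.49.1 -> the next free address, .3, is assigned to ha-434755-m02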
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
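With the key pair generated and authorized_keys installed, the node container is reachable over the SSH port Docker published on 127.0.0.1 (32788 on this run). A minimal sketch reusing the inspect template and key path already shown in this log:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-434755-m02)
    ssh -i /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa \
        -p "$PORT" docker@127.0.0.1 hostname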
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
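configureAuth generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.3 and the node hostname, and copyRemoteCerts places the CA and server pair under /etc/docker inside the node. A quick sanity check, assuming a shell inside ha-434755-m02 (openssl is present in the kicbase image, as the later cert-hashing steps in this log show):

    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'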
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
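The diff above shows the drop-in replacing the stock ExecStart with one that adds the TLS listener on :2376 plus the ulimit and restart settings, after which docker.service is moved into place and restarted. To confirm the rewritten unit is the one actually loaded, from a shell inside the node:

    sudo systemctl cat docker.service | grep -A2 '^ExecStart='
    sudo systemctl is-active docker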
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
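The find/sed pass above patches the loopback CNI config to cniVersion 1.0.0 and renames any pre-installed bridge/podman configs so they cannot conflict with the CNI minikube sets up later. Inside the node the visible effect is simply the renamed files:

    ls /etc/cni/net.d/
    # on this run 87-podman-bridge.conflist and 100-crio-bridge.conf carry the .mk_disabled suffix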
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
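The sed edits above rewrite /etc/containerd/config.toml so containerd uses the systemd cgroup driver detected on the host, the runc v2 runtime and /etc/cni/net.d, before containerd is restarted. A minimal check from inside the node:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    sudo systemctl is-active containerd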
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
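Because a second control-plane node is joining, the apiserver serving certificate is regenerated with SANs for both node IPs and the HA VIP 192.168.49.254. The result can be inspected on the host running the test, assuming openssl is available there (path taken from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt \
      | grep -A2 'Subject Alternative Name'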
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
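Each CA dropped into /usr/share/ca-certificates is also linked under its OpenSSL subject hash in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how the system OpenSSL locates trust anchors. The hash-named link can be reproduced by hand on the node:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # symlink to minikubeCA.pem; hash b5213941 on this run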
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
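Since no ip_vs kernel modules were found, the generated kube-vip manifest runs in ARP mode only: the pod takes the plndr-cp-lock lease and the current leader binds 192.168.49.254/32 to eth0, with no IPVS load-balancing of the apiserver. A rough way to observe this from a control-plane node once the pod is running (a sketch of the later state, not part of this log):

    lsmod | grep ip_vs || echo "ip_vs not loaded -> kube-vip ARP mode"
    ip addr show eth0 | grep 192.168.49.254   # present only on the current kube-vip leader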
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
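	The join of m02 follows the standard kubeadm control-plane flow: a join command is printed on the existing control plane (kubeadm token create --print-join-command), executed on the new node with --control-plane and an advertise address, and afterwards minikube labels the node and removes the default control-plane NoSchedule taint so ordinary pods can land there. A sketch of the equivalent manual sequence (token and hash shown as placeholders, not the values from this run):

	  # on an existing control-plane node
	  sudo kubeadm token create --print-join-command --ttl=0
	  # on the joining node, reuse the printed token/hash and add the control-plane flags
	  sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	    --discovery-token-ca-cert-hash sha256:<hash> \
	    --control-plane --apiserver-advertise-address=192.168.49.3
	  # finally, allow regular workloads on the new control-plane node
	  kubectl taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-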
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
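	The healthz probe goes directly to the first node's apiserver (the stale VIP-based host was overridden two lines earlier). The same check can be reproduced by hand; on a default kubeadm cluster /healthz is readable without credentials, so -k with no token is enough (a sketch):

	  curl -sk https://192.168.49.2:8443/healthz
	  # expected output: ok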
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
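	The Pending entries (kindnet-74q9s, kube-proxy-4cnsm, kube-proxy-tzxjp) are the DaemonSet pods scheduled onto the freshly joined node whose containers have not started yet, which is expected a few seconds after a join. To watch only the pods placed on the new node (a sketch, assuming kubectl access to this cluster):

	  kubectl -n kube-system get pods -o wide --field-selector spec.nodeName=ha-434755-m02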
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
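	The NodePressure step reads each node's conditions and capacity (here 304681132Ki ephemeral storage and 8 CPUs per node) to confirm neither node reports MemoryPressure or DiskPressure before continuing. The same conditions can be inspected manually (a sketch):

	  kubectl describe node ha-434755-m02 | sed -n '/Conditions:/,/Addresses:/p'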
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
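	The docker run above creates the m03 node as a privileged container on the ha-434755 network with the pre-calculated static IP 192.168.49.4; ports 22, 8443, 2376, 5000 and 32443 are published to random loopback ports on the host, and those mappings are what the later ssh and apiserver clients use. The SSH mapping can be checked with (a sketch):

	  docker port ha-434755-m03 22/tcp
	  # e.g. 127.0.0.1:32793, matching the port the ssh client connects to below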
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
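	configureAuth copies the host CA and client certs into the profile and generates a Docker TLS server certificate for the new machine whose SANs cover loopback, the container IP and the hostname (the san=[...] list above). Once provisioning has copied it to the node (as /etc/docker/server.pem, a few lines below), the SANs can be confirmed with (a sketch, run on the node):

	  openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'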
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
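	The sed edits above align containerd with the "systemd" cgroup driver detected on the host (SystemdCgroup = true), pin the pause image to registry.k8s.io/pause:3.10.1, point conf_dir at /etc/cni/net.d and re-enable unprivileged ports before restarting the service. A quick check that the driver setting landed (a sketch, run on the node):

	  grep -n 'SystemdCgroup' /etc/containerd/config.toml
	  # expected: ... SystemdCgroup = true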
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
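	crictl reports the runtime as docker because it is talking to Docker through cri-dockerd; /etc/crictl.yaml was just rewritten to point at unix:///var/run/cri-dockerd.sock, so the endpoint flag below is redundant and only spelled out for clarity (a sketch, run on the node):

	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a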
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         6 minutes ago        Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         6 minutes ago        Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         6 minutes ago        Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         6 minutes ago        Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         6 minutes ago        Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago        Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         6 minutes ago        Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     6 minutes ago        Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         6 minutes ago        Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         6 minutes ago        Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         6 minutes ago        Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         6 minutes ago        Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m34s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m34s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m37s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m34s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m44s (x8 over 6m45s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x8 over 6m45s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x7 over 6m45s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m37s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m35s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           5m44s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m4s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m51s  kube-proxy       
	  Normal  RegisteredNode  6m1s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m     node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  5m44s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m38s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m43s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  5m41s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  5m40s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  5m39s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.268113Z","caller":"etcdserver/server.go:1838","msg":"sending merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.268302Z","caller":"rafthttp/snapshot_sender.go:82","msg":"sending database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.272009Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":1466368,"size":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:25:32.274638Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:25:32.274740Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.276836Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:25:32.276872Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.284009Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":723,"remote-peer-id":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294689Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:25:32.294789Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:25:32.314771Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:25:32.314816Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	
	
	==> kernel <==
	 22:31:21 up  1:13,  0 users,  load average: 0.48, 3.19, 25.08
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:30:33.800799       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:43.801030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:30:43.801063       1 main.go:301] handling current node
	I0919 22:30:43.801079       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:30:43.801085       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:30:43.801392       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:30:43.801417       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:53.792599       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:30:53.792637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:30:53.792846       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:30:53.792862       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:30:53.792998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:30:53.793012       1 main.go:301] handling current node
	I0919 22:31:03.791633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:03.791684       1 main.go:301] handling current node
	I0919 22:31:03.791704       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:03.791709       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:03.791894       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:03.791909       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:13.794575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:13.794625       1 main.go:301] handling current node
	I0919 22:31:13.794642       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:13.794647       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:13.794848       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:13.794863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.27s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 node add --alsologtostderr -v 5: exit status 80 (27.734485712s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-434755 as [worker]
	* Starting "ha-434755-m04" worker node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-434755-m04"  ...
	* Deleting "ha-434755-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:31:22.719100  223157 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:31:22.719407  223157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:22.719419  223157 out.go:374] Setting ErrFile to fd 2...
	I0919 22:31:22.719426  223157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:22.719762  223157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:31:22.720195  223157 mustload.go:65] Loading cluster: ha-434755
	I0919 22:31:22.720723  223157 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:31:22.721232  223157 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:31:22.741903  223157 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:31:22.742231  223157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:31:22.801367  223157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:31:22.791656459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:22.801758  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:31:22.820278  223157 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:31:22.820740  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:31:22.837452  223157 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:31:22.837773  223157 api_server.go:166] Checking apiserver status ...
	I0919 22:31:22.837837  223157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:31:22.837900  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:31:22.855642  223157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:31:22.959841  223157 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:31:22.969696  223157 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:31:22.969754  223157 ssh_runner.go:195] Run: ls
	I0919 22:31:22.973456  223157 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:31:22.978645  223157 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:31:22.980263  223157 out.go:179] * Adding node m04 to cluster ha-434755 as [worker]
	I0919 22:31:22.981588  223157 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:31:22.981716  223157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:31:22.983578  223157 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:31:22.984868  223157 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:31:22.985938  223157 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:31:22.986914  223157 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:31:22.986955  223157 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:31:22.986968  223157 cache.go:58] Caching tarball of preloaded images
	I0919 22:31:22.986978  223157 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:31:22.987071  223157 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:31:22.987089  223157 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:31:22.987236  223157 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:31:23.007332  223157 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:31:23.007349  223157 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:31:23.007364  223157 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:31:23.007392  223157 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:31:23.007536  223157 start.go:364] duration metric: took 100.409µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:31:23.007562  223157 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 22:31:23.007690  223157 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:31:23.009351  223157 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:31:23.009476  223157 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:31:23.009534  223157 client.go:168] LocalClient.Create starting
	I0919 22:31:23.009609  223157 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:31:23.009639  223157 main.go:141] libmachine: Decoding PEM data...
	I0919 22:31:23.009653  223157 main.go:141] libmachine: Parsing certificate...
	I0919 22:31:23.009711  223157 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:31:23.009729  223157 main.go:141] libmachine: Decoding PEM data...
	I0919 22:31:23.009737  223157 main.go:141] libmachine: Parsing certificate...
	I0919 22:31:23.009920  223157 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:31:23.026574  223157 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0014186f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:31:23.026628  223157 kic.go:121] calculated static IP "192.168.49.5" for the "ha-434755-m04" container
	I0919 22:31:23.026687  223157 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:31:23.044002  223157 cli_runner.go:164] Run: docker volume create ha-434755-m04 --label name.minikube.sigs.k8s.io=ha-434755-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:31:23.061139  223157 oci.go:103] Successfully created a docker volume ha-434755-m04
	I0919 22:31:23.061201  223157 cli_runner.go:164] Run: docker run --rm --name ha-434755-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m04 --entrypoint /usr/bin/test -v ha-434755-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:31:23.432010  223157 oci.go:107] Successfully prepared a docker volume ha-434755-m04
	I0919 22:31:23.432054  223157 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:31:23.432076  223157 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:31:23.432126  223157 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:31:27.195749  223157 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.763559722s)
	I0919 22:31:27.195782  223157 kic.go:203] duration metric: took 3.763702262s to extract preloaded images to volume ...
	W0919 22:31:27.195878  223157 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:31:27.195913  223157 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:31:27.195949  223157 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:31:27.253130  223157 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m04 --name ha-434755-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m04 --network ha-434755 --ip 192.168.49.5 --volume ha-434755-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:31:27.522409  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Running}}
	I0919 22:31:27.541001  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:27.560715  223157 cli_runner.go:164] Run: docker exec ha-434755-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:31:27.616268  223157 oci.go:144] the created container "ha-434755-m04" has a running status.
	I0919 22:31:27.616303  223157 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa...
	I0919 22:31:27.773952  223157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:31:27.774007  223157 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:31:27.993205  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:28.011454  223157 cli_runner.go:164] Run: docker inspect ha-434755-m04
	I0919 22:31:28.028123  223157 errors.go:84] Postmortem inspect ("docker inspect ha-434755-m04"): -- stdout --
	[
	    {
	        "Id": "f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d",
	        "Created": "2025-09-19T22:31:27.268173524Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:31:27.30003107Z",
	            "FinishedAt": "2025-09-19T22:31:27.649644393Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d/hosts",
	        "LogPath": "/var/lib/docker/containers/f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d/f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d-json.log",
	        "Name": "/ha-434755-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755-m04:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d",
	                "LowerDir": "/var/lib/docker/overlay2/3d63599f122e27f8ae7523a54644348a953226f1ad34a9aa53b854986d1b64a5-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d63599f122e27f8ae7523a54644348a953226f1ad34a9aa53b854986d1b64a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d63599f122e27f8ae7523a54644348a953226f1ad34a9aa53b854986d1b64a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d63599f122e27f8ae7523a54644348a953226f1ad34a9aa53b854986d1b64a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755-m04",
	                "Source": "/var/lib/docker/volumes/ha-434755-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755-m04",
	                "name.minikube.sigs.k8s.io": "ha-434755-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755-m04",
	                        "f6f027606b09"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0919 22:31:28.028218  223157 cli_runner.go:164] Run: docker logs --timestamps --details ha-434755-m04
	I0919 22:31:28.048236  223157 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-434755-m04"): -- stdout --
	2025-09-19T22:31:27.516123006Z  + userns=
	2025-09-19T22:31:27.516153019Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:31:27.519270626Z  + validate_userns
	2025-09-19T22:31:27.519285626Z  + [[ -z '' ]]
	2025-09-19T22:31:27.519288934Z  + return
	2025-09-19T22:31:27.519291481Z  + configure_containerd
	2025-09-19T22:31:27.519319914Z  + local snapshotter=
	2025-09-19T22:31:27.519334668Z  + [[ -n '' ]]
	2025-09-19T22:31:27.519337768Z  + [[ -z '' ]]
	2025-09-19T22:31:27.519847230Z  ++ stat -f -c %T /kind
	2025-09-19T22:31:27.521033335Z  + container_filesystem=overlayfs
	2025-09-19T22:31:27.521048756Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:31:27.521052082Z  + [[ -n '' ]]
	2025-09-19T22:31:27.521054554Z  + configure_proxy
	2025-09-19T22:31:27.521057580Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:31:27.524466119Z  + [[ ! -z '' ]]
	2025-09-19T22:31:27.524480452Z  + cat
	2025-09-19T22:31:27.525630806Z  + fix_mount
	2025-09-19T22:31:27.525646168Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:31:27.525649262Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:31:27.526099522Z  ++ which mount
	2025-09-19T22:31:27.527534784Z  ++ which umount
	2025-09-19T22:31:27.528362739Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:31:27.533830494Z  ++ which mount
	2025-09-19T22:31:27.535126115Z  ++ which umount
	2025-09-19T22:31:27.536088836Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:31:27.537710829Z  +++ which mount
	2025-09-19T22:31:27.538572459Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:31:27.539630825Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:31:27.539643282Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:31:27.539646697Z  INFO: remounting /sys read-only
	2025-09-19T22:31:27.539649319Z  + mount -o remount,ro /sys
	2025-09-19T22:31:27.541614375Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:31:27.541628515Z  INFO: making mounts shared
	2025-09-19T22:31:27.541632211Z  + mount --make-rshared /
	2025-09-19T22:31:27.543316247Z  + retryable_fix_cgroup
	2025-09-19T22:31:27.543694503Z  ++ seq 0 10
	2025-09-19T22:31:27.544482269Z  + for i in $(seq 0 10)
	2025-09-19T22:31:27.544489000Z  + fix_cgroup
	2025-09-19T22:31:27.544584921Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:31:27.544599728Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:31:27.544603922Z  INFO: detected cgroup v2
	2025-09-19T22:31:27.544617401Z  + return
	2025-09-19T22:31:27.544621462Z  + return
	2025-09-19T22:31:27.544670718Z  + fix_machine_id
	2025-09-19T22:31:27.544679226Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:31:27.544682278Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:31:27.544685024Z  + rm -f /etc/machine-id
	2025-09-19T22:31:27.545718188Z  + systemd-machine-id-setup
	2025-09-19T22:31:27.549020122Z  Initializing machine ID from random generator.
	2025-09-19T22:31:27.551070956Z  + fix_product_name
	2025-09-19T22:31:27.551083645Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:31:27.551086587Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:31:27.551089527Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:31:27.551095154Z  + echo kind
	2025-09-19T22:31:27.552115749Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:31:27.553534916Z  + fix_product_uuid
	2025-09-19T22:31:27.553548012Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:31:27.553551392Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:31:27.554656096Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:31:27.554671783Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:31:27.554674670Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:31:27.554676955Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:31:27.556349390Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:31:27.556364178Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:31:27.556367602Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:31:27.556370623Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:31:27.558138742Z  + select_iptables
	2025-09-19T22:31:27.558153254Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:31:27.559376848Z  ++ grep -c '^-'
	2025-09-19T22:31:27.562224200Z  ++ true
	2025-09-19T22:31:27.562446754Z  + num_legacy_lines=0
	2025-09-19T22:31:27.563381556Z  ++ grep -c '^-'
	2025-09-19T22:31:27.568717041Z  + num_nft_lines=6
	2025-09-19T22:31:27.568729245Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:31:27.568731595Z  + mode=nft
	2025-09-19T22:31:27.568733372Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:31:27.568735302Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:31:27.568750246Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:31:27.568808984Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:31:27.568822180Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:31:27.569217586Z  ++ seq 0 15
	2025-09-19T22:31:27.570008745Z  + for i in $(seq 0 15)
	2025-09-19T22:31:27.570036682Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:31:27.572888858Z  + return
	2025-09-19T22:31:27.572904592Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:31:27.572947123Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:31:27.572952052Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:31:27.573357028Z  ++ seq 0 15
	2025-09-19T22:31:27.574091217Z  + for i in $(seq 0 15)
	2025-09-19T22:31:27.574104316Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:31:27.576579355Z  + return
	2025-09-19T22:31:27.576595191Z  + enable_network_magic
	2025-09-19T22:31:27.576636298Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:31:27.576646850Z  + local docker_host_ip
	2025-09-19T22:31:27.577807663Z  ++ cut '-d ' -f1
	2025-09-19T22:31:27.577915496Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:27.578048679Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:31:27.610168382Z  + docker_host_ip=
	2025-09-19T22:31:27.610209453Z  + [[ -z '' ]]
	2025-09-19T22:31:27.611167503Z  ++ ip -4 route show default
	2025-09-19T22:31:27.611186377Z  ++ cut '-d ' -f3
	2025-09-19T22:31:27.613311168Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:31:27.613330404Z  + iptables-save
	2025-09-19T22:31:27.614003472Z  + iptables-restore
	2025-09-19T22:31:27.616582734Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:31:27.624935849Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:31:27.626710575Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:31:27.627952263Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:31:27.627965021Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:31:27.627968531Z  # has been modified.
	2025-09-19T22:31:27.627971180Z  
	2025-09-19T22:31:27.627974009Z  nameserver 192.168.49.1
	2025-09-19T22:31:27.627976692Z  search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:31:27.627979361Z  options edns0 trust-ad ndots:0
	2025-09-19T22:31:27.627990279Z  
	2025-09-19T22:31:27.627992811Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:31:27.627995194Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:31:27.627997562Z  # Overrides: []
	2025-09-19T22:31:27.627999751Z  # Option ndots from: internal'
	2025-09-19T22:31:27.628002040Z  + [[ '' == '' ]]
	2025-09-19T22:31:27.628004445Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:31:27.628006851Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:31:27.628009455Z  # has been modified.
	2025-09-19T22:31:27.628012329Z  
	2025-09-19T22:31:27.628014685Z  nameserver 192.168.49.1
	2025-09-19T22:31:27.628017390Z  search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:31:27.628020611Z  options edns0 trust-ad ndots:0
	2025-09-19T22:31:27.628023318Z  
	2025-09-19T22:31:27.628025791Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:31:27.628028818Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:31:27.628031685Z  # Overrides: []
	2025-09-19T22:31:27.628034083Z  # Option ndots from: internal'
	2025-09-19T22:31:27.628194858Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:31:27.628215981Z  + local files_to_update
	2025-09-19T22:31:27.628219260Z  + local should_fix_certificate=false
	2025-09-19T22:31:27.629429337Z  ++ cut '-d ' -f1
	2025-09-19T22:31:27.629632825Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:27.630113834Z  ++++ hostname
	2025-09-19T22:31:27.630877836Z  +++ timeout 5 getent ahostsv4 ha-434755-m04
	2025-09-19T22:31:27.633571451Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:31:27.633585544Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:31:27.633589294Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:31:27.633592081Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:31:27.633638186Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:31:27.633646904Z  + echo -n 192.168.49.5
	2025-09-19T22:31:27.634766737Z  ++ cut '-d ' -f1
	2025-09-19T22:31:27.634858625Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:27.635393837Z  ++++ hostname
	2025-09-19T22:31:27.636187140Z  +++ timeout 5 getent ahostsv6 ha-434755-m04
	2025-09-19T22:31:27.638559663Z  + curr_ipv6=
	2025-09-19T22:31:27.638570328Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:31:27.638579136Z  INFO: Detected IPv6 address: 
	2025-09-19T22:31:27.638581076Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:31:27.638582680Z  + [[ -n '' ]]
	2025-09-19T22:31:27.638584342Z  + false
	2025-09-19T22:31:27.638993783Z  ++ uname -a
	2025-09-19T22:31:27.639803387Z  + echo 'entrypoint completed: Linux ha-434755-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:31:27.639816811Z  entrypoint completed: Linux ha-434755-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:31:27.639819511Z  + exec /sbin/init
	2025-09-19T22:31:27.646684402Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:31:27.646696263Z  Detected virtualization docker.
	2025-09-19T22:31:27.646698800Z  Detected architecture x86-64.
	2025-09-19T22:31:27.646798943Z  
	2025-09-19T22:31:27.646803795Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:31:27.646807345Z  
	2025-09-19T22:31:27.647189238Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:27.647237605Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:27.647246440Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:27.647248774Z  Exiting PID 1...
	
	-- /stdout --
	I0919 22:31:28.048329  223157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:31:28.105329  223157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:31:28.095110967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:28.105421  223157 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:31:28.095110967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:28.105493  223157 network_create.go:284] running [docker network inspect ha-434755-m04] to gather additional debugging logs...
	I0919 22:31:28.105531  223157 cli_runner.go:164] Run: docker network inspect ha-434755-m04
	W0919 22:31:28.122455  223157 cli_runner.go:211] docker network inspect ha-434755-m04 returned with exit code 1
	I0919 22:31:28.122542  223157 network_create.go:287] error running [docker network inspect ha-434755-m04]: docker network inspect ha-434755-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755-m04 not found
	I0919 22:31:28.122564  223157 network_create.go:289] output of [docker network inspect ha-434755-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755-m04 not found
	
	** /stderr **
	I0919 22:31:28.122612  223157 client.go:171] duration metric: took 5.113066853s to LocalClient.Create
	I0919 22:31:30.123699  223157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:31:30.123760  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:30.142098  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:30.142238  223157 retry.go:31] will retry after 154.016354ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:30.296671  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:30.315678  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:30.315802  223157 retry.go:31] will retry after 208.876537ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:30.525244  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:30.542683  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:30.542792  223157 retry.go:31] will retry after 494.689932ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:31.038541  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:31.056010  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:31.056113  223157 retry.go:31] will retry after 730.624848ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:31.787027  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:31.806339  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:31:31.806519  223157 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:31:31.806540  223157 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:31.806595  223157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:31:31.806641  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:31.825278  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:31.825415  223157 retry.go:31] will retry after 331.756588ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:32.158007  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:32.176356  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:32.176511  223157 retry.go:31] will retry after 396.237501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:32.573119  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:32.590793  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:32.590935  223157 retry.go:31] will retry after 531.005125ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:33.122648  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:33.141946  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:31:33.142088  223157 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:31:33.142100  223157 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:33.142112  223157 start.go:128] duration metric: took 10.13441372s to createHost
	I0919 22:31:33.142121  223157 start.go:83] releasing machines lock for "ha-434755-m04", held for 10.134573677s
	W0919 22:31:33.142135  223157 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:27.647189238Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:27.647237605Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:27.647246440Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:27.647248774Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:31:33.142557  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:33.160309  223157 stop.go:39] StopHost: ha-434755-m04
	W0919 22:31:33.160686  223157 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:31:33.162550  223157 out.go:179] * Stopping node "ha-434755-m04"  ...
	I0919 22:31:33.163665  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:33.180688  223157 stop.go:87] host is in state Stopped
	I0919 22:31:33.180790  223157 main.go:141] libmachine: Stopping "ha-434755-m04"...
	I0919 22:31:33.180852  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:33.197233  223157 stop.go:66] stop err: Machine "ha-434755-m04" is already stopped.
	I0919 22:31:33.197269  223157 stop.go:69] host is already stopped
	W0919 22:31:34.197740  223157 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:31:34.199289  223157 out.go:179] * Deleting "ha-434755-m04" in docker ...
	I0919 22:31:34.200384  223157 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-434755-m04
	I0919 22:31:34.218467  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:34.235757  223157 cli_runner.go:164] Run: docker exec --privileged -t ha-434755-m04 /bin/bash -c "sudo init 0"
	W0919 22:31:34.253874  223157 cli_runner.go:211] docker exec --privileged -t ha-434755-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0919 22:31:34.253905  223157 oci.go:659] error shutdown ha-434755-m04: docker exec --privileged -t ha-434755-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container f6f027606b09a10a81f2b6ea8f00fc63f38571bace9dd88c26c5f8b0328bcc6d is not running
	I0919 22:31:35.254736  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:35.273533  223157 oci.go:667] container ha-434755-m04 status is Stopped
	I0919 22:31:35.273565  223157 oci.go:679] Successfully shutdown container ha-434755-m04
	I0919 22:31:35.273609  223157 cli_runner.go:164] Run: docker rm -f -v ha-434755-m04
	I0919 22:31:35.294801  223157 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-434755-m04
	W0919 22:31:35.311150  223157 cli_runner.go:211] docker container inspect -f {{.Id}} ha-434755-m04 returned with exit code 1
	I0919 22:31:35.311282  223157 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:31:35.328508  223157 cli_runner.go:164] Run: docker network rm ha-434755
	W0919 22:31:35.345351  223157 cli_runner.go:211] docker network rm ha-434755 returned with exit code 1
	W0919 22:31:35.345466  223157 kic.go:390] failed to remove network (which might be okay) ha-434755: unable to delete a network that is attached to a running container
	W0919 22:31:35.345721  223157 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:27.647189238Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:27.647237605Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:27.647246440Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:27.647248774Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:27.647189238Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:27.647237605Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:27.647246440Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:27.647248774Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:31:35.345744  223157 start.go:729] Will try again in 5 seconds ...
	I0919 22:31:40.348460  223157 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:31:40.348568  223157 start.go:364] duration metric: took 74.873µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:31:40.348599  223157 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 22:31:40.348715  223157 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:31:40.350327  223157 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:31:40.350426  223157 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:31:40.350454  223157 client.go:168] LocalClient.Create starting
	I0919 22:31:40.350519  223157 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:31:40.350562  223157 main.go:141] libmachine: Decoding PEM data...
	I0919 22:31:40.350577  223157 main.go:141] libmachine: Parsing certificate...
	I0919 22:31:40.350657  223157 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:31:40.350681  223157 main.go:141] libmachine: Decoding PEM data...
	I0919 22:31:40.350692  223157 main.go:141] libmachine: Parsing certificate...
	I0919 22:31:40.350904  223157 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:31:40.367385  223157 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc001513680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:31:40.367418  223157 kic.go:121] calculated static IP "192.168.49.5" for the "ha-434755-m04" container
	I0919 22:31:40.367474  223157 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:31:40.382841  223157 cli_runner.go:164] Run: docker volume create ha-434755-m04 --label name.minikube.sigs.k8s.io=ha-434755-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:31:40.398562  223157 oci.go:103] Successfully created a docker volume ha-434755-m04
	I0919 22:31:40.398652  223157 cli_runner.go:164] Run: docker run --rm --name ha-434755-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m04 --entrypoint /usr/bin/test -v ha-434755-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:31:40.657253  223157 oci.go:107] Successfully prepared a docker volume ha-434755-m04
	I0919 22:31:40.657290  223157 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:31:40.657312  223157 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:31:40.657379  223157 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:31:44.521412  223157 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.863975868s)
	I0919 22:31:44.521453  223157 kic.go:203] duration metric: took 3.86413543s to extract preloaded images to volume ...
	W0919 22:31:44.521586  223157 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:31:44.521630  223157 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:31:44.521679  223157 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:31:44.577925  223157 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m04 --name ha-434755-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m04 --network ha-434755 --ip 192.168.49.5 --volume ha-434755-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:31:44.839700  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Running}}
	I0919 22:31:44.857959  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:44.875430  223157 cli_runner.go:164] Run: docker exec ha-434755-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:31:44.927026  223157 oci.go:144] the created container "ha-434755-m04" has a running status.
	I0919 22:31:44.927073  223157 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa...
	I0919 22:31:45.434933  223157 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:31:45.434987  223157 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:31:45.467710  223157 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:45.486385  223157 cli_runner.go:164] Run: docker inspect ha-434755-m04
	I0919 22:31:45.503183  223157 errors.go:84] Postmortem inspect ("docker inspect ha-434755-m04"): -- stdout --
	[
	    {
	        "Id": "ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c",
	        "Created": "2025-09-19T22:31:44.593614033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:31:44.628806859Z",
	            "FinishedAt": "2025-09-19T22:31:44.959288871Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c/hosts",
	        "LogPath": "/var/lib/docker/containers/ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c/ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c-json.log",
	        "Name": "/ha-434755-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-434755-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ae8222f7b3503ad5946f124cf2a34c1ff1f3979ca198e08cb8b574d0b4cfc64c",
	                "LowerDir": "/var/lib/docker/overlay2/6d0788efbe65ff7a0537fffbd7858b54416c1e0f107efd4a0f51c3d58f23cdf7-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d0788efbe65ff7a0537fffbd7858b54416c1e0f107efd4a0f51c3d58f23cdf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d0788efbe65ff7a0537fffbd7858b54416c1e0f107efd4a0f51c3d58f23cdf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d0788efbe65ff7a0537fffbd7858b54416c1e0f107efd4a0f51c3d58f23cdf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-434755-m04",
	                "Source": "/var/lib/docker/volumes/ha-434755-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755-m04",
	                "name.minikube.sigs.k8s.io": "ha-434755-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755-m04",
	                        "ae8222f7b350"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0919 22:31:45.503272  223157 cli_runner.go:164] Run: docker logs --timestamps --details ha-434755-m04
	I0919 22:31:45.523037  223157 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-434755-m04"): -- stdout --
	2025-09-19T22:31:44.833591814Z  + userns=
	2025-09-19T22:31:44.833624195Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:31:44.836143205Z  + validate_userns
	2025-09-19T22:31:44.836163642Z  + [[ -z '' ]]
	2025-09-19T22:31:44.836167269Z  + return
	2025-09-19T22:31:44.836179327Z  + configure_containerd
	2025-09-19T22:31:44.836182468Z  + local snapshotter=
	2025-09-19T22:31:44.836185411Z  + [[ -n '' ]]
	2025-09-19T22:31:44.836188279Z  + [[ -z '' ]]
	2025-09-19T22:31:44.836677336Z  ++ stat -f -c %T /kind
	2025-09-19T22:31:44.838070269Z  + container_filesystem=overlayfs
	2025-09-19T22:31:44.838084364Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:31:44.838088045Z  + [[ -n '' ]]
	2025-09-19T22:31:44.838141436Z  + configure_proxy
	2025-09-19T22:31:44.838153716Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:31:44.842817015Z  + [[ ! -z '' ]]
	2025-09-19T22:31:44.842832754Z  + cat
	2025-09-19T22:31:44.844199772Z  + fix_mount
	2025-09-19T22:31:44.844213686Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:31:44.844216790Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:31:44.844669618Z  ++ which mount
	2025-09-19T22:31:44.846066356Z  ++ which umount
	2025-09-19T22:31:44.846938490Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:31:44.853356569Z  ++ which mount
	2025-09-19T22:31:44.854491454Z  ++ which umount
	2025-09-19T22:31:44.855477959Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:31:44.857098207Z  +++ which mount
	2025-09-19T22:31:44.857969233Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:31:44.859448668Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:31:44.859461204Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:31:44.859463420Z  INFO: remounting /sys read-only
	2025-09-19T22:31:44.859465292Z  + mount -o remount,ro /sys
	2025-09-19T22:31:44.861361849Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:31:44.861370906Z  INFO: making mounts shared
	2025-09-19T22:31:44.861372945Z  + mount --make-rshared /
	2025-09-19T22:31:44.862705229Z  + retryable_fix_cgroup
	2025-09-19T22:31:44.863131325Z  ++ seq 0 10
	2025-09-19T22:31:44.864059311Z  + for i in $(seq 0 10)
	2025-09-19T22:31:44.864074970Z  + fix_cgroup
	2025-09-19T22:31:44.864078523Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:31:44.864133503Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:31:44.864144947Z  INFO: detected cgroup v2
	2025-09-19T22:31:44.864161292Z  + return
	2025-09-19T22:31:44.864164708Z  + return
	2025-09-19T22:31:44.864167549Z  + fix_machine_id
	2025-09-19T22:31:44.864173813Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:31:44.864176692Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:31:44.864179455Z  + rm -f /etc/machine-id
	2025-09-19T22:31:44.865213207Z  + systemd-machine-id-setup
	2025-09-19T22:31:44.868543676Z  Initializing machine ID from random generator.
	2025-09-19T22:31:44.870487565Z  + fix_product_name
	2025-09-19T22:31:44.870518200Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:31:44.870556563Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:31:44.870569720Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:31:44.870573228Z  + echo kind
	2025-09-19T22:31:44.871533932Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:31:44.873186791Z  + fix_product_uuid
	2025-09-19T22:31:44.873199537Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:31:44.873202733Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:31:44.874258042Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:31:44.874272702Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:31:44.874276327Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:31:44.874279177Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:31:44.875583046Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:31:44.875596418Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:31:44.875599761Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:31:44.875602749Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:31:44.876935153Z  + select_iptables
	2025-09-19T22:31:44.876946789Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:31:44.877796665Z  ++ grep -c '^-'
	2025-09-19T22:31:44.880550094Z  ++ true
	2025-09-19T22:31:44.880773819Z  + num_legacy_lines=0
	2025-09-19T22:31:44.881667440Z  ++ grep -c '^-'
	2025-09-19T22:31:44.887545713Z  + num_nft_lines=6
	2025-09-19T22:31:44.887561768Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:31:44.887565778Z  + mode=nft
	2025-09-19T22:31:44.887568381Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:31:44.887571528Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:31:44.887574393Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:31:44.887610836Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:31:44.887621636Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:31:44.888100809Z  ++ seq 0 15
	2025-09-19T22:31:44.889097771Z  + for i in $(seq 0 15)
	2025-09-19T22:31:44.889112204Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:31:44.890258118Z  + return
	2025-09-19T22:31:44.890272422Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:31:44.890275836Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:31:44.890278861Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:31:44.890712945Z  ++ seq 0 15
	2025-09-19T22:31:44.891721547Z  + for i in $(seq 0 15)
	2025-09-19T22:31:44.891733944Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:31:44.892900904Z  + return
	2025-09-19T22:31:44.892998068Z  + enable_network_magic
	2025-09-19T22:31:44.893008283Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:31:44.893011334Z  + local docker_host_ip
	2025-09-19T22:31:44.894347158Z  ++ cut '-d ' -f1
	2025-09-19T22:31:44.894959905Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:44.894995834Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:31:44.918386276Z  + docker_host_ip=
	2025-09-19T22:31:44.918406653Z  + [[ -z '' ]]
	2025-09-19T22:31:44.919064921Z  ++ ip -4 route show default
	2025-09-19T22:31:44.919563222Z  ++ cut '-d ' -f3
	2025-09-19T22:31:44.921876401Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:31:44.922010604Z  + iptables-save
	2025-09-19T22:31:44.922884771Z  + iptables-restore
	2025-09-19T22:31:44.925274177Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:31:44.935900369Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:31:44.937601130Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:31:44.938746383Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:31:44.938761749Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:31:44.938765308Z  # has been modified.
	2025-09-19T22:31:44.938768136Z  
	2025-09-19T22:31:44.938771002Z  nameserver 192.168.49.1
	2025-09-19T22:31:44.938773927Z  search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:31:44.938777060Z  options edns0 trust-ad ndots:0
	2025-09-19T22:31:44.938789734Z  
	2025-09-19T22:31:44.938792503Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:31:44.938795512Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:31:44.938798307Z  # Overrides: []
	2025-09-19T22:31:44.938800972Z  # Option ndots from: internal'
	2025-09-19T22:31:44.938803744Z  + [[ '' == '' ]]
	2025-09-19T22:31:44.938806479Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:31:44.938809270Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:31:44.938812254Z  # has been modified.
	2025-09-19T22:31:44.938814997Z  
	2025-09-19T22:31:44.938817602Z  nameserver 192.168.49.1
	2025-09-19T22:31:44.938820440Z  search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:31:44.938823412Z  options edns0 trust-ad ndots:0
	2025-09-19T22:31:44.938826147Z  
	2025-09-19T22:31:44.938828722Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:31:44.938831714Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:31:44.938834515Z  # Overrides: []
	2025-09-19T22:31:44.938837274Z  # Option ndots from: internal'
	2025-09-19T22:31:44.938968034Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:31:44.938980949Z  + local files_to_update
	2025-09-19T22:31:44.938984576Z  + local should_fix_certificate=false
	2025-09-19T22:31:44.940113656Z  ++ cut '-d ' -f1
	2025-09-19T22:31:44.940216978Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:44.940706870Z  ++++ hostname
	2025-09-19T22:31:44.941573613Z  +++ timeout 5 getent ahostsv4 ha-434755-m04
	2025-09-19T22:31:44.944080523Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:31:44.944096173Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:31:44.944100015Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:31:44.944102973Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:31:44.944105737Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:31:44.944108659Z  + echo -n 192.168.49.5
	2025-09-19T22:31:44.945288304Z  ++ cut '-d ' -f1
	2025-09-19T22:31:44.945410366Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:31:44.945957851Z  ++++ hostname
	2025-09-19T22:31:44.946692488Z  +++ timeout 5 getent ahostsv6 ha-434755-m04
	2025-09-19T22:31:44.948971744Z  + curr_ipv6=
	2025-09-19T22:31:44.948986297Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:31:44.948998324Z  INFO: Detected IPv6 address: 
	2025-09-19T22:31:44.949001636Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:31:44.949005890Z  + [[ -n '' ]]
	2025-09-19T22:31:44.949008753Z  + false
	2025-09-19T22:31:44.949430604Z  ++ uname -a
	2025-09-19T22:31:44.950191579Z  + echo 'entrypoint completed: Linux ha-434755-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:31:44.950201015Z  entrypoint completed: Linux ha-434755-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:31:44.950204287Z  + exec /sbin/init
	2025-09-19T22:31:44.956315162Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:31:44.956327092Z  Detected virtualization docker.
	2025-09-19T22:31:44.956330272Z  Detected architecture x86-64.
	2025-09-19T22:31:44.956434046Z  
	2025-09-19T22:31:44.956444731Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:31:44.956448282Z  
	2025-09-19T22:31:44.956847714Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:44.956856407Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:44.956887881Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:44.956897469Z  Exiting PID 1...
	
	-- /stdout --
	I0919 22:31:45.523115  223157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:31:45.575257  223157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:31:45.565467868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:45.575348  223157 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:31:45.565467868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux A
rchitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:fal
se Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:45.575451  223157 network_create.go:284] running [docker network inspect ha-434755-m04] to gather additional debugging logs...
	I0919 22:31:45.575476  223157 cli_runner.go:164] Run: docker network inspect ha-434755-m04
	W0919 22:31:45.591990  223157 cli_runner.go:211] docker network inspect ha-434755-m04 returned with exit code 1
	I0919 22:31:45.592018  223157 network_create.go:287] error running [docker network inspect ha-434755-m04]: docker network inspect ha-434755-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755-m04 not found
	I0919 22:31:45.592031  223157 network_create.go:289] output of [docker network inspect ha-434755-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755-m04 not found
	
	** /stderr **
	I0919 22:31:45.592096  223157 client.go:171] duration metric: took 5.241635917s to LocalClient.Create
	I0919 22:31:47.593298  223157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:31:47.593353  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:47.611407  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:47.611560  223157 retry.go:31] will retry after 343.719016ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:47.956075  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:47.974782  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:47.974906  223157 retry.go:31] will retry after 435.603017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:48.411514  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:48.429389  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:48.429520  223157 retry.go:31] will retry after 560.649504ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:48.990940  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:49.009738  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:31:49.009878  223157 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:31:49.009902  223157 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:49.009953  223157 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:31:49.010015  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:49.027018  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:49.027141  223157 retry.go:31] will retry after 235.221649ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:49.262599  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:49.279529  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:49.279645  223157 retry.go:31] will retry after 458.171682ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:49.738305  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:49.757633  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:31:49.757769  223157 retry.go:31] will retry after 621.81798ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:50.380034  223157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:31:50.397436  223157 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:31:50.397582  223157 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:31:50.397604  223157 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:31:50.397627  223157 start.go:128] duration metric: took 10.048905076s to createHost
	I0919 22:31:50.397639  223157 start.go:83] releasing machines lock for "ha-434755-m04", held for 10.049059175s
	W0919 22:31:50.397767  223157 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:44.956847714Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:44.956856407Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:44.956887881Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:44.956897469Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:44.956847714Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:44.956856407Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:44.956887881Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:44.956897469Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:31:50.400001  223157 out.go:203] 
	W0919 22:31:50.401057  223157 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:44.956847714Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:44.956856407Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:44.956887881Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:44.956897469Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-434755-m04" state Stopped: log: 2025-09-19T22:31:44.956847714Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:31:44.956856407Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:31:44.956887881Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:31:44.956897469Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:31:50.402214  223157 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-434755 node add --alsologtostderr -v 5" : exit status 80
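Note: the new worker container "ha-434755-m04" exited before SSH provisioning because systemd (PID 1 inside the kicbase container) reported "Failed to create control group inotify object: Too many open files". In practice this points at exhausted fs.inotify limits on the CI host rather than a per-process file-descriptor limit. A minimal check/raise sketch for the host, assuming root access; the values below are illustrative, not taken from this run:

	# inspect the host's current inotify limits
	sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
	# raise them for the current boot (persist via /etc/sysctl.d/ if this resolves the failure)
	sudo sysctl -w fs.inotify.max_user_instances=1024
	sudo sysctl -w fs.inotify.max_user_watches=1048576
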
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-434755 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- rollout status deployment/busybox                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ node    │ ha-434755 node add --alsologtostderr -v 5                                                                                 │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
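	(Note: the three certificate blocks above follow the standard OpenSSL trust-store convention: each PEM under /usr/share/ca-certificates gets hashed and symlinked into /etc/ssl/certs as <subject-hash>.0. A minimal sketch of that step, reusing the minikubeCA.pem path and the b5213941 hash computed in the log above:
	# Print the OpenSSL subject hash that names the symlink (b5213941 for this CA)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# Expose the certificate to OpenSSL-based tools under /etc/ssl/certs/<hash>.0
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0)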
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
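	(Note: the three control-plane-check probes above hit plain HTTPS health endpoints and can be reproduced by hand from inside the node. Illustrative sketch only, not part of the test run; -k skips verification of the self-signed serving certs, and /livez and /healthz are reachable without credentials under default kubeadm authorization settings:
	curl -k https://192.168.49.2:8443/livez        # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz        # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez          # kube-scheduler)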
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
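	(Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. A rough sketch of recomputing it on the control-plane node, assuming the RSA CA used here and the /var/lib/minikube/certs/ca.crt location this run copies the CA to, rather than kubeadm's default /etc/kubernetes/pki/ca.crt:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //')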
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
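	(Note: the elevateKubeSystemPrivileges step timed above corresponds to the minikube-rbac clusterrolebinding plus the repeated "get sa default" calls in the preceding lines. A rough sketch, with kubectl standing in for the pinned /var/lib/minikube/binaries/v1.34.0/kubectl binary and the polling loop being illustrative rather than the literal test code:
	# Bind cluster-admin to the kube-system default service account
	sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Wait until the default service account exists before applying addons
	until sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done)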
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
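	(Note: the sed pipeline above splices a hosts block into the coredns ConfigMap immediately before the existing "forward . /etc/resolv.conf" line, and a "log" directive before "errors", so pods can resolve host.minikube.internal to the network gateway. The resulting Corefile fragment looks roughly like this:
	        log
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        })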
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
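The two commands above probe net.bridge.bridge-nf-call-iptables and switch on IPv4 forwarding for the current boot only. A hedged sketch of making the same kernel settings persistent across reboots (the sysctl.d file name is illustrative):

  # persist the networking sysctls that kube-proxy and the CNI rely on
  sudo tee /etc/sysctl.d/99-kubernetes.conf >/dev/null <<'EOF'
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
  EOF
  sudo sysctl --system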
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
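The openssl/ln pairs above implement the standard OpenSSL CA directory layout: each trusted certificate in /etc/ssl/certs is reachable through a <subject-hash>.0 symlink so that TLS clients can locate it by subject hash. A minimal sketch of the same step for a single certificate (the path is illustrative):

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941, as in the log above
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # lets OpenSSL resolve the CA by hash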
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
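The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod and the virtual IP 192.168.49.254 fronts the control-plane API servers. A rough sketch of verifying the VIP once the pod is up (assumes kubectl access to the cluster; the grep is just a convenience):

  # the static pod shows up in kube-system as kube-vip-<nodename>
  kubectl -n kube-system get pods | grep kube-vip
  # the VIP should answer the same healthz endpoint as the API servers behind it
  curl -k https://192.168.49.254:8443/healthz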
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
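The join above is the usual kubeadm control-plane join, except that minikube pre-copies the shared CA, sa, front-proxy, and etcd keys over SSH (seen earlier in this log) instead of relying on kubeadm's certificate upload. A rough sketch of the equivalent flow on a stock kubeadm cluster; token, hash, and key values are placeholders:

  # on an existing control-plane node: fresh join command plus uploaded certs
  kubeadm token create --print-join-command
  kubeadm init phase upload-certs --upload-certs      # prints a --certificate-key value
  # on the new node: join as an additional control plane
  sudo kubeadm join <endpoint>:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --certificate-key <key>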
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
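For illustration only: a minimal Go sketch of the running-state check performed at 22:25:21.498-21.601 above, polling "docker container inspect" until State.Running reports true. The helper name waitRunning and the 30s timeout are assumptions, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect` until the container reports
// State.Running=true or the context expires.
func waitRunning(ctx context.Context, name string) error {
	for {
		out, err := exec.CommandContext(ctx, "docker", "container", "inspect",
			name, "--format", "{{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("container %s not running: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitRunning(ctx, "ha-434755-m03"); err != nil {
		fmt.Println(err)
	}
}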
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
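The provisioning steps above drive the new node over SSH on the published host port (127.0.0.1:32793 here) with the generated id_rsa key. A self-contained sketch of one such call using golang.org/x/crypto/ssh follows; it is illustrative, not minikube's sshutil/ssh_runner implementation, and the InsecureIgnoreHostKey setting is an assumption that is only acceptable for a throwaway local kic container.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}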
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
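The docker.service update above writes docker.service.new, diffs it against the installed unit, and only swaps the file and restarts docker when they differ. A local-filesystem sketch of that idempotence pattern, with a placeholder path and a stubbed restart hook instead of the real systemctl-over-SSH calls:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes the rendered unit only when it differs from what is on
// disk, and invokes onChange (reload/restart) only in that case.
func updateUnit(path string, rendered []byte, onChange func() error) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: leave the running daemon untouched
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	return onChange()
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	err := updateUnit("/tmp/docker.service.example", unit, func() error {
		// in minikube this step is `systemctl daemon-reload && systemctl restart docker` run via sudo over SSH
		out, err := exec.Command("echo", "daemon-reload && restart docker").CombinedOutput()
		fmt.Print(string(out))
		return err
	})
	if err != nil {
		fmt.Println(err)
	}
}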
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
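At 22:25:25.084-25.110 the bridge and podman CNI configs are renamed to *.mk_disabled so they cannot conflict with the cluster CNI. A rough local equivalent in Go, with a placeholder directory standing in for /etc/cni/net.d inside the node (minikube performs the rename via sudo over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/tmp/cni-example" // stand-in for /etc/cni/net.d inside the node
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}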
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
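The apiserver certificate generated at 22:25:27.074-27.130 carries every control-plane IP plus the service IP and the HA VIP as SANs. A compact crypto/x509 sketch that produces a certificate with that IP SAN list, signed by a throwaway in-process CA; minikube signs with the profile CA instead, and the key size, validity and subject below are assumptions. Error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// throwaway CA (stand-in for the profile's minikubeCA key pair)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// leaf cert with the IP SANs seen in the log above
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}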
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
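kube-vip control-plane load-balancing was skipped at 22:25:27.834 above because "lsmod | grep ip_vs" found no ipvs modules. A small sketch of the same probe done by reading /proc/modules directly (minikube shells out to lsmod over SSH; the helper name is illustrative):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether any ip_vs kernel module is currently loaded.
func hasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasIPVS()
	fmt.Println("ip_vs loaded:", ok, err)
}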
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
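
The lines above are the full control-plane join sequence minikube drives over SSH: mint a join token on the primary, run kubeadm join with --control-plane on the new machine, restart the kubelet, then label the node and drop its NoSchedule taint. A minimal manual sketch of the same flow follows (not part of the test run); host names, addresses and the CRI socket are copied from this log, while <token> and <hash> stand in for the values kubeadm prints.

    # On the existing control plane: create a non-expiring token and print the matching join command.
    sudo kubeadm token create --print-join-command --ttl=0

    # On the node being added: join as an additional control plane.
    # Assumes the cluster certificates are already on the node (minikube copies them itself);
    # a plain kubeadm setup would instead pass --certificate-key from "kubeadm init phase upload-certs".
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=192.168.49.4 \
      --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/cri-dockerd.sock

    # Afterwards, as start.go does above: mark the node as secondary and allow workloads on it.
    kubectl label --overwrite nodes ha-434755-m03 minikube.k8s.io/primary=false
    kubectl taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
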
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
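
The six-minute node wait above just polls the Node object until its Ready condition turns True. Roughly the same check by hand, assuming the kubectl context created for this profile:

    # Wait until the freshly joined node reports Ready.
    kubectl --context ha-434755 wait --for=condition=Ready node/ha-434755-m03 --timeout=6m
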
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
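
The healthz probe goes straight at the first control plane (the stale VIP host was overridden a few lines earlier). A hand-run equivalent, assuming the default RBAC that leaves /healthz readable without credentials:

    # -k skips verification against the minikube CA; a healthy apiserver answers with the body "ok".
    curl -k https://192.168.49.2:8443/healthz
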
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
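
The NodePressure pass above only reads each node's reported capacity (all three nodes show 8 CPUs and the same ephemeral-storage figure). The same data can be pulled per node, for example:

    # Capacity as stored in .status.capacity (cpu, memory, ephemeral-storage, pods).
    kubectl --context ha-434755 get node ha-434755-m03 -o jsonpath='{.status.capacity}{"\n"}'
    # Memory/Disk/PID pressure appear under Conditions in the describe output.
    kubectl --context ha-434755 describe node ha-434755-m03
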
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
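
The failure above is the extra pod wait timing out: kube-proxy-dzrbh, which the readiness checks resolve against node ha-434755-m03, was still Pending when the kube-system pods were listed and never reported Ready in the four minutes that followed, so the run exits with GUEST_START. A short follow-up sketch for pinning down why the pod never started (names taken from this log):

    # Phase and assigned node.
    kubectl --context ha-434755 -n kube-system get pod kube-proxy-dzrbh -o wide
    # The Events section usually names the blocker (image pull, CNI not ready, taints, ...).
    kubectl --context ha-434755 -n kube-system describe pod kube-proxy-dzrbh
    # Container output, if a container ever started.
    kubectl --context ha-434755 -n kube-system logs kube-proxy-dzrbh --all-containers=true
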
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              6 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
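
In the listing above, the two Exited coredns containers (attempt 1) are the instances whose "network is unreachable" output appears in the coredns sections further down. Their logs can also be read straight from the node's Docker daemon, e.g.:

    # Run Docker commands inside the node container for this profile.
    minikube -p ha-434755 ssh -- docker ps -a --filter name=coredns
    minikube -p ha-434755 ssh -- docker logs e66b377f63cd
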
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m4s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m4s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m7s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m4s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m2s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m14s (x8 over 7m15s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s (x8 over 7m15s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s (x7 over 7m15s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m7s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m7s                   kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m7s                   kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m7s                   kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m5s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m36s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m34s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m34s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m20s  kube-proxy       
	  Normal  RegisteredNode  6m31s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m30s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m14s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m8s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m13s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m11s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m10s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m9s   node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479741Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a99fbed258953a7f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.262846ms"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479818Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6088e2429f689fd8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.344681ms"}
	{"level":"info","ts":"2025-09-19T22:31:25.543409Z","caller":"traceutil/trace.go:172","msg":"trace[1476697735] linearizableReadLoop","detail":"{readStateIndex:2212; appliedIndex:2212; }","duration":"122.469916ms","start":"2025-09-19T22:31:25.420904Z","end":"2025-09-19T22:31:25.543374Z","steps":["trace[1476697735] 'read index received'  (duration: 122.461259ms)","trace[1476697735] 'applied index is now lower than readState.Index'  (duration: 7.407µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:31:25.545247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.309293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:25.545343Z","caller":"traceutil/trace.go:172","msg":"trace[1198199391] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1836; }","duration":"124.432545ms","start":"2025-09-19T22:31:25.420893Z","end":"2025-09-19T22:31:25.545326Z","steps":["trace[1198199391] 'agreement among raft nodes before linearized reading'  (duration: 122.582946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:26.310807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.705072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:26.310897Z","caller":"traceutil/trace.go:172","msg":"trace[2094450770] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1839; }","duration":"182.81062ms","start":"2025-09-19T22:31:26.128070Z","end":"2025-09-19T22:31:26.310880Z","steps":["trace[2094450770] 'range keys from in-memory index tree'  (duration: 182.279711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:27.082780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.669043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082613715695 > lease_revoke:<id:70cc99641453c257>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:31:27.178782Z","caller":"traceutil/trace.go:172","msg":"trace[2040827292] transaction","detail":"{read_only:false; response_revision:1841; number_of_response:1; }","duration":"161.541003ms","start":"2025-09-19T22:31:27.017222Z","end":"2025-09-19T22:31:27.178763Z","steps":["trace[2040827292] 'process raft request'  (duration: 161.420124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.889764Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.078552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:43.889838Z","caller":"traceutil/trace.go:172","msg":"trace[1908677250] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1879; }","duration":"108.172765ms","start":"2025-09-19T22:31:43.781651Z","end":"2025-09-19T22:31:43.889824Z","steps":["trace[1908677250] 'range keys from in-memory index tree'  (duration: 108.036209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.890177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.618892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4215256431365582417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:1856 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:4215256431365582413 >> failure:<>>","response":"size:16"}
	
	
	==> kernel <==
	 22:31:51 up  1:14,  0 users,  load average: 1.28, 3.14, 24.37
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:31:03.791909       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:13.794575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:13.794625       1 main.go:301] handling current node
	I0919 22:31:13.794642       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:13.794647       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:13.794848       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:13.794863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:23.791602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:23.791646       1 main.go:301] handling current node
	I0919 22:31:23.791664       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:23.791670       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:23.791897       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:23.791911       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:33.800280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:33.800319       1 main.go:301] handling current node
	I0919 22:31:33.800338       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:33.800343       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:33.800580       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:33.800596       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800572       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:43.800609       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:43.800843       1 main.go:301] handling current node
	I0919 22:31:43.800858       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:43.800864       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (29.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (2.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:309: expected profile "ha-434755" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-434755\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-434755\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",
\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-434755\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\"
:\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-d
river-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimi
zations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-434755 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- rollout status deployment/busybox                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:29 UTC │ 19 Sep 25 22:29 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:30 UTC │ 19 Sep 25 22:30 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.io                                              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default                                         │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-rhlg4 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-v7khr -- nslookup kubernetes.default.svc.cluster.local                       │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ kubectl │ ha-434755 kubectl -- exec busybox-7b57f96db7-c67nh -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ node    │ ha-434755 node add --alsologtostderr -v 5                                                                                 │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
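configureAuth above generated a Docker TLS server certificate with SANs [127.0.0.1 192.168.49.2 ha-434755 localhost minikube] and copied it to /etc/docker on the node. A minimal sketch for confirming the deployed certificate carries those SANs (paths taken from this log, command itself is illustrative):

    minikube -p ha-434755 ssh -- sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list DNS:ha-434755, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.49.2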
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
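The generated docker.service above deliberately contains two ExecStart= lines: the empty one clears the ExecStart inherited from the stock unit (otherwise systemd rejects a second ExecStart for anything but Type=oneshot, as the unit's own comment notes), and the second one adds the TLS listener on tcp://0.0.0.0:2376. A sketch for verifying the unit systemd actually loaded and the new listener (commands assumed, not part of this run):

    minikube -p ha-434755 ssh -- sudo systemctl cat docker | grep -E '^ExecStart'
    minikube -p ha-434755 ssh -- sudo ss -lntp | grep 2376   # dockerd should now be listening for TLS connections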
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
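The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true) and set the pause image before containerd is restarted, matching the "systemd" driver detected on the host. A rough way to double-check both sides of that decision (sketch, assuming a cgroup v2 host):

    stat -fc %T /sys/fs/cgroup                                        # cgroup2fs on a cgroup v2 host
    minikube -p ha-434755 ssh -- grep SystemdCgroup /etc/containerd/config.toml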
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
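The 129-byte /etc/docker/daemon.json pushed above is not printed in the log; it is what switches dockerd itself to the systemd cgroup driver. Rather than guessing its contents, the effective setting can be confirmed after the restart (illustrative command; minikube runs the same query itself further down):

    minikube -p ha-434755 ssh -- docker info --format '{{.CgroupDriver}}'   # expected: systemd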
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
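crictl reports Docker 28.4.0 here because /etc/crictl.yaml was rewritten a few steps earlier to point at the cri-dockerd shim. The same query with the endpoint made explicit (sketch only):

    minikube -p ha-434755 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version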
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
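The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new and copied into place before kubeadm init runs later in this sequence. If one wanted to sanity-check such a rendered config by hand, a dry run against it is a low-risk way to do so (hypothetical follow-up, not part of this test run):

    minikube -p ha-434755 ssh -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run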
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
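Because the lsmod check above found no ip_vs modules, kube-vip skips IPVS-based control-plane load-balancing and falls back to ARP advertisement (vip_arp=true) of the VIP 192.168.49.254 from whichever control-plane node holds the plndr-cp-lock lease. A sketch of the checks one could run on a host where the modules are available (illustrative, not part of this run):

    sudo modprobe ip_vs && lsmod | grep '^ip_vs'
    # once the cluster is up, the VIP should answer on the API server port:
    curl -k https://192.168.49.254:8443/version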
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
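The profile certificates generated above matter for the HA setup: the apiserver serving cert was signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254], i.e. it includes the kube-vip VIP, so clients reaching the API through https://192.168.49.254:8443 pass TLS verification. A quick check against the file on the host (path from this log, command illustrative):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt \
      | grep -A1 'Subject Alternative Name'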
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
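The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) are how OpenSSL's trust store locates each CA: the name is the subject hash of the corresponding PEM, which is exactly what the openssl x509 -hash -noout calls in this sequence compute. Reproducing one of them by hand (sketch using the same hash flag):

    minikube -p ha-434755 ssh -- openssl x509 -noout -hash -in /usr/share/ca-certificates/minikubeCA.pem
    # should print b5213941, matching the link /etc/ssl/certs/b5213941.0 -> minikubeCA.pem created above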
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
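For reference, the bootstrap token printed in the kubeadm output above has a limited TTL; if the join commands are needed again later, they can be regenerated on the primary control-plane node. A minimal sketch of generic kubeadm usage (not captured in this run):

  # print a fresh worker join command
  kubeadm token create --print-join-command
  # for an additional control-plane node, re-upload the certs and append
  # "--control-plane --certificate-key <key>" to the printed join command
  kubeadm init phase upload-certs --upload-certs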
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
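Because a multi-node cluster is expected, the manifest applied above is minikube's kindnet CNI. Once the apply succeeds, the rollout can be spot-checked from the host; a verification sketch (the DaemonSet name kindnet is assumed from minikube's kindnet manifest, not shown in this log):

  kubectl -n kube-system get daemonset kindnet
  kubectl -n kube-system get pods -o wide | grep kindnet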
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
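The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway from inside the cluster. Reconstructed from that sed expression (not copied from the live ConfigMap), the inserted Corefile stanza is roughly:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }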
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
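Once the two default addons report enabled, their objects can be checked from the host. A verification sketch (the resource names follow minikube's defaults and are assumed, they are not printed in this log):

  kubectl get storageclass
  kubectl -n kube-system get pod storage-provisioner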
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
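The docker run above publishes SSH, the API server, and the docker/registry ports of the new node on ephemeral host ports (127.0.0.1::22, 127.0.0.1::8443, and so on). The mappings that the later cli_runner inspections resolve can also be read directly with the standard docker CLI; a quick sketch:

  docker port ha-434755-m02 22
  docker port ha-434755-m02 8443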
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
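The server certificate generated here is signed for the SANs listed above and is copied to /etc/docker/server.pem a few lines further down. After provisioning, the SANs can be confirmed on the node with a generic openssl check (a verification sketch, not part of this run):

  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'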
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
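The docker.service unit written and installed above replaces the distribution ExecStart with minikube's dockerd command line and restarts the daemon. Whether the override actually took effect can be confirmed with systemd itself; a verification sketch using standard systemctl commands:

  sudo systemctl cat docker.service
  sudo systemctl show docker --property=ExecStart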
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
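The find/sed pass above normalises any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0) and sidelines stray bridge/podman configs so kindnet stays the only pod network. Reconstructed from those sed expressions, the patched loopback file ends up equivalent to a minimal config like this (the exact filename varies):

  {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
  }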
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
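	(Editor's note) The block above rewrites /etc/containerd/config.toml in place with sed so containerd on ha-434755-m02 matches the "systemd" cgroup driver detected on the host, repoints the sandbox (pause) image, and then restarts containerd. A minimal spot-check from the host, assuming the grep patterns simply mirror the sed edits above:

	    docker exec ha-434755-m02 grep -n 'SystemdCgroup = true' /etc/containerd/config.toml
	    docker exec ha-434755-m02 grep -n 'sandbox_image = "registry.k8s.io/pause:3.10.1"' /etc/containerd/config.toml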
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
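	(Editor's note) The node is then switched over to Docker + cri-dockerd: containerd is stopped, crictl is pointed at /var/run/cri-dockerd.sock, and a small /etc/docker/daemon.json (129 bytes here) is written before docker and cri-docker are restarted. The daemon.json content is not echoed in the log; assuming it carries the usual cgroup-driver override, a hedged check would be:

	    # the daemon.json content is an assumption, but the effective driver is easy to confirm
	    docker exec ha-434755-m02 cat /etc/docker/daemon.json
	    docker exec ha-434755-m02 docker info --format '{{.CgroupDriver}}'    # expected: systemd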
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
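	(Editor's note) Everything m02 needs to come up as a second control plane is pushed over SSH here: the cluster and proxy-client CAs, the regenerated apiserver certificate (whose SANs now cover both node IPs and the HA VIP 192.168.49.254, per the generation step above), the shared service-account keypair, the front-proxy and etcd CAs, and a kubeconfig. With a reasonably recent openssl, the SANs can be read back out of the regenerated cert, for example:

	    openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt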
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
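	(Editor's note) The kubelet unit drop-in rendered above overrides ExecStart so the m02 kubelet boots with node-specific flags (--hostname-override=ha-434755-m02, --node-ip=192.168.49.3) against the v1.34.0 binary already present on the node. Once it is scp'd into place a few lines below, the merged unit can be inspected with, for example:

	    docker exec ha-434755-m02 systemctl cat kubelet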
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
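	(Editor's note) Because lsmod found no ip_vs modules (exit status 1 above), minikube falls back to ARP-based leader election for the control-plane VIP instead of IPVS load-balancing; the static pod manifest it renders pins kube-vip v1.0.0 in kube-system and advertises 192.168.49.254 on eth0, port 8443. Once the manifest lands in /etc/kubernetes/manifests, an illustrative check (the VIP is held only by whichever control-plane node currently owns the plndr-cp-lock lease):

	    docker exec ha-434755-m02 ip addr show eth0 | grep 192.168.49.254
	    curl -k https://192.168.49.254:8443/healthz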
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
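	(Editor's note) The kubeadm join above (about 18.3s of the 18.9s total) is what promotes m02 to a control-plane member: it uses the token and CA hash minted on the primary node and advertises 192.168.49.3:8443, after which the follow-up kubectl calls label the node with minikube metadata and strip the control-plane NoSchedule taint so workloads can land on it. A quick confirmation using the same kubectl binary and kubeconfig the log already relies on:

	    docker exec ha-434755 sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get node ha-434755-m02 -o wide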
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
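	(Editor's note) Extracting the preloaded image tarball into the ha-434755-m03 volume (~2.9s here) is what lets the new node skip pulling the v1.34.0 images on first boot; the volume is later mounted at /var inside the node container. A hedged way to peek at the result before that container exists, reusing the same kicbase image with the entrypoint overridden as minikube itself does above:

	    docker run --rm --entrypoint /bin/ls \
	      -v ha-434755-m03:/var \
	      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
	      /var/lib/docker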
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
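The server certificate generated here is the TLS identity that dockerd on the new machine will present on tcp://0.0.0.0:2376 (see the ExecStart written below); the SAN list in the line above (127.0.0.1, 192.168.49.4, ha-434755-m03, localhost, minikube) is what later client connections must match. A minimal check on the provisioned node, assuming openssl is available in the kicbase image (a sketch):

	# print the subject alternative names of the cert copied to /etc/docker
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'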
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
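The unit file written above replaces the stock docker.service only when the diff shows a difference (the diff || { mv; restart; } pattern), and the empty ExecStart= line clears the inherited start command before the new one is set, exactly as the comment block in the generated file explains; systemd refuses a non-oneshot service that ends up with two ExecStart values. After the restart, the effective unit and command can be inspected with (a sketch):

	# show the unit systemd actually loaded and its effective start command
	systemctl cat docker
	systemctl show docker -p ExecStart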
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
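Before choosing a cgroup driver, minikube normalizes the CNI directory: the loopback config is patched to carry a name and cniVersion 1.0.0, and the pre-baked bridge/podman configs are renamed to *.mk_disabled so they do not shadow the cluster's own CNI (kindnet pods appear for this profile later in the log). The result is easy to eyeball on the node (a sketch):

	# active configs keep their names; disabled ones carry the .mk_disabled suffix
	ls -l /etc/cni/net.d/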
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
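The sed edits above rewrite /etc/containerd/config.toml in place: SystemdCgroup = true switches containerd to the systemd cgroup driver detected on the host, the sandbox image is pinned to registry.k8s.io/pause:3.10.1, conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled, after which containerd is restarted. A spot check on the node (a sketch):

	# confirm the cgroup setting took and the service came back
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	systemctl is-active containerd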
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
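Because the runtime for this profile is Docker, crictl is re-pointed at the cri-dockerd socket (overwriting the containerd endpoint written to /etc/crictl.yaml a moment earlier), so CRI-level tooling talks to Docker through cri-dockerd. Verifying from the node is straightforward (a sketch):

	# crictl reads its runtime endpoint from /etc/crictl.yaml
	sudo cat /etc/crictl.yaml
	sudo crictl version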
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
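docker.go:575 writes a small /etc/docker/daemon.json so dockerd also uses the systemd cgroup driver, keeping it consistent with containerd and the kubelet, and the daemon is then reloaded and restarted. The effective driver can be confirmed without guessing at the file's exact contents (a sketch):

	# CgroupDriver should report systemd after the restart
	docker info --format '{{.CgroupDriver}}'
	sudo cat /etc/docker/daemon.json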
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
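This one-liner is the /etc/hosts idiom used throughout: filter out any stale host.minikube.internal entry, append a fresh one pointing at the network gateway 192.168.49.1, and copy the temp file back with cp rather than mv, most likely because /etc/hosts inside the container is a bind mount that can be written in place but not replaced. Confirming the alias (a sketch):

	# the alias points at the docker network gateway (the host side of the bridge)
	grep 'host.minikube.internal' /etc/hosts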
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
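The kubelet unit rendered here (kubeadm.go:938) uses the same ExecStart-clearing override as the docker unit; the flags that matter for a multi-node profile are --hostname-override=ha-434755-m03 and --node-ip=192.168.49.4. The rendered files are copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service a few lines further down, and the merged unit is visible on the node with (a sketch):

	# show kubelet.service plus all drop-ins in load order
	systemctl cat kubelet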
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
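Since lsmod finds no IPVS modules inside the node (the kicbase container shares the host kernel, so module availability is whatever the Jenkins host provides), kube-vip is generated without control-plane load balancing and will only hold the leader-elected virtual IP. Whether IPVS could be enabled is a host-side question (a sketch; modprobe simply fails if the module is not built for this kernel):

	# check for, and attempt to load, the IPVS modules kube-vip would need
	lsmod | grep ip_vs || sudo modprobe ip_vs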
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
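The manifest above runs kube-vip as a static pod on the control-plane node with NET_ADMIN/NET_RAW, doing leader election (vip_leaderelection, lease plndr-cp-lock) and ARP announcements for the virtual IP 192.168.49.254 on eth0; whichever control plane holds the lease answers for the VIP on port 8443. From inside the cluster network the VIP can be probed directly (a sketch):

	# the apiserver behind the VIP should answer its health endpoint
	curl -k --max-time 2 https://192.168.49.254:8443/healthz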
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
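The same /etc/hosts idiom now maps control-plane.minikube.internal to the VIP 192.168.49.254, and that name:port pair is exactly the endpoint the kubeadm join below dials, so the new node reaches whichever control plane currently holds the VIP. A quick resolution check on the node (a sketch):

	# the join endpoint should resolve to the kube-vip address
	getent hosts control-plane.minikube.internal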
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
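After the join completes, the node is labeled with the minikube metadata and the control-plane NoSchedule taint is removed, which matches the node spec's Worker:true, so ha-434755-m03 schedules regular workloads as well as control-plane pods. Confirming the third control plane from outside the node would look roughly like this (hypothetical commands run against this profile's kubeconfig):

	# the node should appear with a control-plane role and become Ready
	kubectl get nodes -o wide
	# pods scheduled or mirrored onto the new node
	kubectl -n kube-system get pods -o wide --field-selector spec.nodeName=ha-434755-m03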
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              7 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m7s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m7s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m10s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m7s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m17s (x8 over 7m18s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x8 over 7m18s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s (x7 over 7m18s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m39s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m37s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m37s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m23s  kube-proxy       
	  Normal  RegisteredNode  6m34s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m33s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m17s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:31:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m11s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m16s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m14s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m13s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m12s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479741Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a99fbed258953a7f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.262846ms"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479818Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6088e2429f689fd8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.344681ms"}
	{"level":"info","ts":"2025-09-19T22:31:25.543409Z","caller":"traceutil/trace.go:172","msg":"trace[1476697735] linearizableReadLoop","detail":"{readStateIndex:2212; appliedIndex:2212; }","duration":"122.469916ms","start":"2025-09-19T22:31:25.420904Z","end":"2025-09-19T22:31:25.543374Z","steps":["trace[1476697735] 'read index received'  (duration: 122.461259ms)","trace[1476697735] 'applied index is now lower than readState.Index'  (duration: 7.407µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:31:25.545247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.309293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:25.545343Z","caller":"traceutil/trace.go:172","msg":"trace[1198199391] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1836; }","duration":"124.432545ms","start":"2025-09-19T22:31:25.420893Z","end":"2025-09-19T22:31:25.545326Z","steps":["trace[1198199391] 'agreement among raft nodes before linearized reading'  (duration: 122.582946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:26.310807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.705072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:26.310897Z","caller":"traceutil/trace.go:172","msg":"trace[2094450770] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1839; }","duration":"182.81062ms","start":"2025-09-19T22:31:26.128070Z","end":"2025-09-19T22:31:26.310880Z","steps":["trace[2094450770] 'range keys from in-memory index tree'  (duration: 182.279711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:27.082780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.669043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082613715695 > lease_revoke:<id:70cc99641453c257>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:31:27.178782Z","caller":"traceutil/trace.go:172","msg":"trace[2040827292] transaction","detail":"{read_only:false; response_revision:1841; number_of_response:1; }","duration":"161.541003ms","start":"2025-09-19T22:31:27.017222Z","end":"2025-09-19T22:31:27.178763Z","steps":["trace[2040827292] 'process raft request'  (duration: 161.420124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.889764Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.078552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:43.889838Z","caller":"traceutil/trace.go:172","msg":"trace[1908677250] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1879; }","duration":"108.172765ms","start":"2025-09-19T22:31:43.781651Z","end":"2025-09-19T22:31:43.889824Z","steps":["trace[1908677250] 'range keys from in-memory index tree'  (duration: 108.036209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.890177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.618892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4215256431365582417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:1856 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:4215256431365582413 >> failure:<>>","response":"size:16"}
	
	
	==> kernel <==
	 22:31:54 up  1:14,  0 users,  load average: 1.28, 3.14, 24.37
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:31:13.794863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:23.791602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:23.791646       1 main.go:301] handling current node
	I0919 22:31:23.791664       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:23.791670       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:23.791897       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:23.791911       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:33.800280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:33.800319       1 main.go:301] handling current node
	I0919 22:31:33.800338       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:33.800343       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:33.800580       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:33.800596       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800572       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:43.800609       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:43.800843       1 main.go:301] handling current node
	I0919 22:31:43.800858       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:43.800864       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:53.791616       1 main.go:301] handling current node
	I0919 22:31:53.791632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:53.791637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791836       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:53.791852       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.58s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --output json --alsologtostderr -v 5: exit status 7 (704.578602ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-434755","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-434755-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-434755-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-434755-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:31:54.987633  227028 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:31:54.987916  227028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:54.987927  227028 out.go:374] Setting ErrFile to fd 2...
	I0919 22:31:54.987931  227028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:54.988139  227028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:31:54.988306  227028 out.go:368] Setting JSON to true
	I0919 22:31:54.988326  227028 mustload.go:65] Loading cluster: ha-434755
	I0919 22:31:54.988371  227028 notify.go:220] Checking for updates...
	I0919 22:31:54.988737  227028 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:31:54.988767  227028 status.go:174] checking status of ha-434755 ...
	I0919 22:31:54.989246  227028 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:31:55.010262  227028 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:31:55.010290  227028 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:31:55.010623  227028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:31:55.027918  227028 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:31:55.028205  227028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:31:55.028245  227028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:31:55.045593  227028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:31:55.139735  227028 ssh_runner.go:195] Run: systemctl --version
	I0919 22:31:55.143950  227028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:31:55.156226  227028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:31:55.209807  227028 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:31:55.200322254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:31:55.210419  227028 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:31:55.210465  227028 api_server.go:166] Checking apiserver status ...
	I0919 22:31:55.210520  227028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:31:55.224562  227028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:31:55.234579  227028 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:31:55.234641  227028 ssh_runner.go:195] Run: ls
	I0919 22:31:55.238184  227028 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:31:55.243643  227028 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:31:55.243665  227028 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:31:55.243678  227028 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:31:55.243702  227028 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:31:55.243981  227028 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:31:55.260701  227028 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:31:55.260722  227028 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:31:55.260973  227028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:31:55.278353  227028 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:31:55.278663  227028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:31:55.278705  227028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:31:55.295979  227028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:31:55.394680  227028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:31:55.406393  227028 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:31:55.406423  227028 api_server.go:166] Checking apiserver status ...
	I0919 22:31:55.406475  227028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:31:55.419008  227028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup
	W0919 22:31:55.428199  227028 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2188/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:31:55.428246  227028 ssh_runner.go:195] Run: ls
	I0919 22:31:55.431570  227028 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:31:55.436461  227028 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:31:55.436485  227028 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:31:55.436510  227028 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:31:55.436531  227028 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:31:55.436766  227028 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:31:55.455761  227028 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:31:55.455783  227028 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:31:55.456031  227028 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:31:55.472840  227028 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:31:55.473103  227028 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:31:55.473145  227028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:31:55.489606  227028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:31:55.581671  227028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:31:55.593595  227028 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:31:55.593621  227028 api_server.go:166] Checking apiserver status ...
	I0919 22:31:55.593651  227028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:31:55.605388  227028 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:31:55.616044  227028 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:31:55.616102  227028 ssh_runner.go:195] Run: ls
	I0919 22:31:55.619669  227028 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:31:55.623691  227028 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:31:55.623716  227028 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:31:55.623727  227028 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:31:55.623747  227028 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:31:55.624079  227028 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:31:55.642337  227028 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:31:55.642353  227028 status.go:384] host is not running, skipping remaining checks
	I0919 22:31:55.642359  227028 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
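Note: the status output above probes each node the same way: the kic container's state via a docker inspect template, the published 22/tcp host port that backs the SSH tunnel, and the API server's /healthz behind the HA VIP. The shell sketch below is illustrative only (profile, node names, and the VIP are the ones from this run; curl -k stands in for minikube's internal health client):

# Reproduce the per-node checks logged above (illustrative, not part of the test).
for NODE in ha-434755 ha-434755-m02 ha-434755-m03 ha-434755-m04; do
  # Container state, as in "docker container inspect ... --format={{.State.Status}}"
  STATE=$(docker container inspect "$NODE" --format '{{.State.Status}}' 2>/dev/null || echo missing)
  echo "$NODE host: $STATE"
  [ "$STATE" = running ] || continue
  # Published host port for 22/tcp (same Go template the log shows)
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NODE"
done
# Control-plane health against the HA VIP recorded in the log
curl -sk https://192.168.49.254:8443/healthz; echo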
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755_ha-434755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test_ha-434755_ha-434755-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755_ha-434755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test_ha-434755_ha-434755-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755_ha-434755-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755_ha-434755-m04.txt: exit status 1 (137.365036ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755_ha-434755-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755_ha-434755-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755_ha-434755-m04.txt": exit status 1 (137.99629ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test_ha-434755_ha-434755-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
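Note: each CopyFile step copies testdata/cp-test.txt onto a node with `minikube cp`, reads it back over `minikube ssh -n`, and compares the two; with ha-434755-m04 stopped, both legs fail and the read-back is empty, which is exactly the "-want +got" mismatch shown above. A minimal sketch of that round trip, assuming the same profile, fixture, and paths as this run (a healthy node is used here so the copy can succeed):

MK=out/minikube-linux-amd64
"$MK" -p ha-434755 cp testdata/cp-test.txt ha-434755-m02:/home/docker/cp-test.txt
GOT=$("$MK" -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt")
if [ "$GOT" = "$(cat testdata/cp-test.txt)" ]; then
  echo "cp round trip OK"
else
  echo "content mismatch (-want +got)"
fi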
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m02_ha-434755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test_ha-434755-m02_ha-434755.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m02_ha-434755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test_ha-434755-m02_ha-434755-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt: exit status 1 (138.794138ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m02:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt": exit status 1 (140.68887ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test_ha-434755-m02_ha-434755-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt: exit status 1 (142.237599ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt": exit status 1 (141.188142ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt: exit status 1 (137.996308ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (141.35127ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"Test file for checking file cp process",
+ 	"",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt: exit status 1 (136.632766ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (138.468835ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt: exit status 1 (156.862947ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (141.206503ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt": exit status 1 (254.075708ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-434755-m04_ha-434755.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755 \"sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-434755-m04_ha-434755.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt: exit status 1 (158.149774ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (140.912897ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt": exit status 1 (262.83871ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m02 \"sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt: No such file or directory\r\n",
)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt: exit status 1 (158.573371ms)

                                                
                                                
** stderr ** 
	getting host: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (138.482073ms)

                                                
                                                
** stderr ** 
	ssh: "ha-434755-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 "sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt": exit status 1 (249.828754ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m03 \"sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt: No such file or directory\r\n",
)
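Note: every failure in this sub-test traces back to the same root cause recorded in the status output earlier: the ha-434755-m04 host is Stopped, so any cp or ssh targeting it exits with "is not running". Before reading the post-mortem below, a quick triage might look like this (illustrative; assumes the m04 container still exists and its log has not been removed):

# State and exit details of the stopped worker's kic container
docker container inspect ha-434755-m04 --format '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}'
# Why the container stopped, if its log is still available
docker logs --tail 50 ha-434755-m04
# The same host status query the harness runs, pointed at the m04 node
out/minikube-linux-amd64 status --format='{{.Host}}' -p ha-434755 -n ha-434755-m04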
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
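Note: in the inspect output above, the empty "HostPort" entries under PortBindings are the requested dynamic bindings; the resolved ports appear under NetworkSettings.Ports (22/tcp on 127.0.0.1:32783 is the port the status check's SSH client used). The same Go template seen in the log can list them all; a small sketch using this run's container name:

# Map the container's exposed ports to the host ports shown above.
for P in 22 2376 5000 8443 32443; do
  HP=$(docker container inspect -f "{{(index (index .NetworkSettings.Ports \"$P/tcp\") 0).HostPort}}" ha-434755)
  echo "$P/tcp -> 127.0.0.1:$HP"
done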
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m03.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
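
	Note: the failed "docker network inspect" above is expected on a fresh profile; the non-zero exit plus the "not found" error is taken to mean "no such network yet", and the code falls through to creating one. A hypothetical Go sketch of that decision (the helper name is invented and this is not the actual network_create.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists is a hypothetical helper: it runs `docker network inspect <name>`
// and treats a non-zero exit with "not found" in the output as "does not exist".
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil
	}
	return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
}

func main() {
	exists, err := networkExists("ha-434755")
	if err != nil {
		panic(err)
	}
	if !exists {
		// Mirrors the create call seen later in the log (subnet/gateway/MTU flags omitted).
		if out, err := exec.Command("docker", "network", "create", "ha-434755").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("create: %v: %s", err, out))
		}
	}
}
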
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
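
	With the free subnet 192.168.49.0/24 chosen, the gateway is the first host address (.1) and the primary control-plane container is assigned the next one (.2). An illustrative Go sketch of that derivation (assumed logic, not the code behind kic.go):

package main

import (
	"fmt"
	"net"
)

// nextIP returns the base address of a /24 bumped by n in its last octet;
// good enough for this illustration.
func nextIP(ip net.IP, n byte) net.IP {
	v4 := ip.To4()
	out := make(net.IP, len(v4))
	copy(out, v4)
	out[3] += n
	return out
}

func main() {
	_, subnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	gateway := nextIP(subnet.IP, 1)   // 192.168.49.1, passed as --gateway above
	firstNode := nextIP(subnet.IP, 2) // 192.168.49.2, the "static IP" in the log
	fmt.Println(gateway, firstNode)
}
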
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
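
	The server certificate generated here carries SANs for every address and name used to reach the node's Docker daemon: 127.0.0.1, 192.168.49.2, ha-434755, localhost and minikube. A rough, self-signed sketch with crypto/x509 showing how such SANs are expressed (minikube actually signs with ca.pem/ca-key.pem rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"ha-434755", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for brevity; the real flow uses the CA cert and key as parent and signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
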
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
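
	Everything from here on runs over SSH to the container's published 22/tcp port (127.0.0.1:32783 in this run) as the docker user, authenticating with the generated id_rsa. A minimal sketch of such a client using golang.org/x/crypto/ssh (paths and port taken from this log, error handling abbreviated):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local, throwaway container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // same first command the provisioner runs
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
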
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
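
	docker.go pushes a small /etc/docker/daemon.json so the daemon's cgroup driver matches the "systemd" driver detected on the host. The exact 129-byte payload is not shown in the log; the sketch below is one plausible way to produce such a file with encoding/json, and only the systemd cgroup-driver setting is established by the log (other fields in the real file may differ):

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models only the field relevant to this step.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{
		// Aligns Docker's cgroup driver with the "systemd" driver detected on the host.
		ExecOpts: []string{"native.cgroupdriver=systemd"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would then be copied to /etc/docker/daemon.json over SSH
}
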
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
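
	start.go allows up to 60 seconds for /var/run/cri-dockerd.sock to appear before querying crictl. A simple polling loop of the kind implied here (illustrative only, and run locally rather than over SSH as minikube does):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is ready")
}
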
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
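
	The extraction is skipped because every image from the preload list already appears in `docker images --format {{.Repository}}:{{.Tag}}`. A hedged sketch of that comparison (the required-image slice below is abbreviated from the list above, and the helper flow is illustrative rather than minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0",
		"registry.k8s.io/etcd:3.6.4-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		// ... remaining images from the preload list above
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := make(map[string]bool)
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img, "- would trigger image loading")
			return
		}
	}
	fmt.Println("Images already preloaded, skipping extraction")
}
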
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
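
	kube-vip's control-plane load-balancing needs the ip_vs kernel modules; since `lsmod | grep ip_vs` exits 1 here, minikube falls back to the plain VIP configuration that follows. An equivalent check sketched in Go, reading /proc/modules instead of shelling out (purely illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ipvsAvailable reports whether any ip_vs module is listed in /proc/modules,
// which is roughly what `lsmod | grep ip_vs` tests.
func ipvsAvailable() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("giving up enabling control-plane load-balancing: ip_vs not available")
		return
	}
	fmt.Println("ip_vs present; IPVS load-balancing could be enabled")
}
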
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
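
The block above installs each CA into /usr/share/ca-certificates and adds the OpenSSL-style hash links in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0); the link name is the subject-name hash printed by the openssl x509 -hash calls, which lets OpenSSL-based clients locate the CAs during verification. The same derivation by hand, as a sketch to be run inside the node:

        # Illustrative: reproduce the b5213941.0 link for minikubeCA.pem.
        h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
        sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
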
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
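
The control-plane-check lines above poll the standard local endpoints: kubelet healthz on 127.0.0.1:10248, kube-controller-manager on 10257, kube-scheduler on 10259, and the API server livez on 192.168.49.2:8443. The same probes can be made by hand; this is illustrative only and assumes curl is present in the kicbase image:

        # Illustrative: mirror the health checks kubeadm just performed.
        minikube -p ha-434755 ssh -- curl -s http://127.0.0.1:10248/healthz
        minikube -p ha-434755 ssh -- curl -sk https://192.168.49.2:8443/livez
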
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
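
The join commands printed above combine a bootstrap token with a CA pin. The --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's public key; as a sketch, it could be recomputed inside the node from the CA this run uses (/var/lib/minikube/certs/ca.crt):

        # Illustrative: recompute the discovery-token-ca-cert-hash shown above.
        openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
          | openssl rsa -pubin -outform der 2>/dev/null \
          | openssl dgst -sha256 -hex | sed 's/^.* //'
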
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
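
Because this is a multi-node profile, minikube selects kindnet as the CNI and applies its manifest with the bundled kubectl, as above. A hedged follow-up check, assuming the manifest's default DaemonSet name and a kubeconfig context named after the profile:

        # Illustrative: kindnet should come up as a DaemonSet in kube-system.
        kubectl --context ha-434755 -n kube-system get daemonset kindnet
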
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
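
The replace pipeline a few lines up rewrites the CoreDNS Corefile: it inserts a hosts block that maps host.minikube.internal to the host gateway 192.168.49.1 with fallthrough, and enables the log plugin; the line above confirms the record was injected. An illustrative way to view the result, assuming the kubeconfig context is named after the profile:

        # Illustrative: the patched Corefile should contain
        #         hosts {
        #            192.168.49.1 host.minikube.internal
        #            fallthrough
        #         }
        kubectl --context ha-434755 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
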
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
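
The docker run above creates the second control-plane node as a privileged kicbase container on the existing ha-434755 network, using the static IP 192.168.49.3 calculated earlier and publishing SSH and the API server port on loopback. An illustrative check of the assigned address, not part of this run:

        # Illustrative: confirm the static IP given to ha-434755-m02.
        docker inspect -f '{{ (index .NetworkSettings.Networks "ha-434755").IPAddress }}' ha-434755-m02
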
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
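The wait loop above checks four things in sequence before moving on to the next node: the kube-system pod list, the default service account, the kubelet systemd unit, and the node's ephemeral-storage/CPU capacity. Below is a minimal client-go sketch of that style of readiness poll; it is illustrative only (not minikube's implementation), and the kubeconfig path is a placeholder.

```go
// readiness_sketch.go - hypothetical sketch of polling kube-system pods
// until they are all Running, similar in spirit to the wait reported above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					running++
				}
			}
			if running > 0 && running == len(pods.Items) {
				fmt.Printf("all %d kube-system pods running\n", running)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}
```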
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
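The static IP above follows directly from the existing ha-434755 network (192.168.49.0/24, gateway .1, first node .2), so the third node lands on .4. A small sketch of computing such an offset host address in a subnet, written as a hypothetical helper rather than minikube's actual kic logic:

```go
// next_ip_sketch.go - illustrative only: derive a node IP by offsetting from
// the subnet base (192.168.49.0/24 + 4 -> 192.168.49.4, as in the log).
package main

import (
	"fmt"
	"net"
)

// nthHost returns the address at base+offset inside cidr, or an error if the
// result would fall outside the subnet.
func nthHost(cidr string, offset int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 subnets supported")
	}
	out := make(net.IP, 4)
	copy(out, ip)
	out[3] += byte(offset) // fine for small offsets within a /24
	if !ipnet.Contains(out) {
		return nil, fmt.Errorf("offset %d leaves %s", offset, cidr)
	}
	return out, nil
}

func main() {
	ip, err := nthHost("192.168.49.0/24", 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.4
}
```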
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
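The server cert generated here is signed by the shared minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.49.4, ha-434755-m03, localhost, minikube), so the node's docker TLS endpoint is valid under any of those names. A compact crypto/x509 sketch of issuing such a SAN-bearing server cert follows; it is simplified (in-memory CA, short validity, errors ignored) and not the provisioner's real code, which loads ca.pem/ca-key.pem from disk.

```go
// servercert_sketch.go - simplified illustration of issuing a server cert
// with the SANs shown in the log, signed by a CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical in-memory CA; the real flow reuses minikubeCA from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs reported for ha-434755-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-434755-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```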
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
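The two find/sed commands above patch any loopback CNI config so it carries a "name" field and cniVersion 1.0.0, then rename bridge/podman configs out of the way so the cluster's own CNI config wins. Below is a hypothetical Go equivalent of just the loopback patch, operating on a single file via encoding/json instead of sed:

```go
// patch_loopback_sketch.go - hypothetical re-implementation of the loopback
// CNI patch: ensure a "name" field exists and pin "cniVersion" to 1.0.0.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func patchLoopback(path string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Illustrative path; real configs live under /etc/cni/net.d.
	if err := patchLoopback("200-loopback.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```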
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
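Because the host reports the "systemd" cgroup driver, containerd's config is rewritten so SystemdCgroup is true before the service is restarted. The sed one-liner from the log, re-expressed as a hypothetical Go snippet against the standard containerd config path:

```go
// systemd_cgroup_sketch.go - hypothetical Go equivalent of the sed edit that
// flips `SystemdCgroup = ...` to `true` in /etc/containerd/config.toml.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```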
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
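Note on the symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0): they are the OpenSSL subject-name hashes of the respective CA certificates, which is what the `openssl x509 -hash -noout` runs compute before the `ln -fs` commands. A minimal Go sketch of that hash-and-link step, assuming openssl is on PATH and using the minikubeCA path from this log, could look like:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert computes the OpenSSL subject-name hash of certPath and creates
    // /etc/ssl/certs/<hash>.0 pointing at it, mirroring the `openssl x509 -hash`
    // plus `ln -fs` commands seen in the log above.
    func linkCert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // -f behaviour: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }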
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
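Note: the `lsmod | grep ip_vs` probe above only inspects /proc/modules, and since it exits non-zero here, IPVS-based load balancing is skipped before the kube-vip manifest below is generated. A hedged Go sketch of an equivalent check on a Linux host (not minikube's actual implementation, which runs the probe over SSH):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // hasIPVS reports whether an ip_vs kernel module shows up in /proc/modules,
    // the same data `lsmod | grep ip_vs` reads in the log above.
    func hasIPVS() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		if strings.HasPrefix(s.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, s.Err()
    }

    func main() {
    	ok, err := hasIPVS()
    	fmt.Println("ip_vs loaded:", ok, "err:", err)
    }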
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
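Note: the manifest above is a static pod that kubelet picks up from /etc/kubernetes/manifests (it is copied there a few lines below as kube-vip.yaml). One way to sanity-check a generated manifest like this before handing it to kubelet is to unmarshal it into the typed Pod object; a minimal sketch, assuming the k8s.io/api and sigs.k8s.io/yaml modules and a hypothetical local copy of the file:

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    // envValue returns the value of the named env var, or "" if absent.
    func envValue(env []corev1.EnvVar, name string) string {
    	for _, e := range env {
    		if e.Name == name {
    			return e.Value
    		}
    	}
    	return ""
    }

    func main() {
    	// "kube-vip.yaml" is a hypothetical local copy of the manifest printed above.
    	data, err := os.ReadFile("kube-vip.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var pod corev1.Pod
    	if err := yaml.Unmarshal(data, &pod); err != nil {
    		fmt.Fprintln(os.Stderr, "invalid manifest:", err)
    		os.Exit(1)
    	}
    	fmt.Printf("pod %s/%s, image %s, VIP %s\n",
    		pod.Namespace, pod.Name,
    		pod.Spec.Containers[0].Image,
    		envValue(pod.Spec.Containers[0].Env, "address"))
    }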
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
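Note: the join above is the standard kubeadm HA flow: a join command is printed with `kubeadm token create --print-join-command` on the existing control-plane node, then executed on the new node with the extra control-plane flags shown in the log, followed by enabling kubelet. A rough local sketch of the second half using os/exec (minikube actually drives these commands over SSH, and the token/hash placeholders below stand in for the values printed earlier):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run executes a command, streaming its output and aborting on failure.
    func run(name string, args ...string) {
    	cmd := exec.Command(name, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintf(os.Stderr, "%s %v: %v\n", name, args, err)
    		os.Exit(1)
    	}
    }

    func main() {
    	// The join command would come from `kubeadm token create --print-join-command --ttl=0`
    	// executed on the primary control-plane node, as in the log above.
    	run("kubeadm", "join", "control-plane.minikube.internal:8443",
    		"--token", "<token>", "--discovery-token-ca-cert-hash", "<hash>",
    		"--control-plane", "--apiserver-advertise-address", "192.168.49.4",
    		"--apiserver-bind-port", "8443",
    		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
    		"--ignore-preflight-errors=all", "--node-name", "ha-434755-m03")
    	run("systemctl", "daemon-reload")
    	run("systemctl", "enable", "kubelet")
    	run("systemctl", "start", "kubelet")
    }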
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
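Note: the healthz wait above is an HTTPS GET against the apiserver until it answers 200 "ok". A minimal sketch of the same poll, assuming the cluster CA file and apiserver address shown earlier in this log (paths adjusted to a generic home directory):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // waitHealthz polls https://<host>/healthz until it answers 200 "ok" or the
    // attempts run out, roughly what the api_server.go wait above is doing.
    func waitHealthz(host, caPath string, attempts int) error {
    	caPEM, err := os.ReadFile(caPath)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get("https://" + host + "/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", host)
    }

    func main() {
    	// Assumed CA path; the log above uses the Jenkins workspace's .minikube/ca.crt.
    	err := waitHealthz("192.168.49.2:8443", os.ExpandEnv("$HOME/.minikube/ca.crt"), 30)
    	fmt.Println("healthz:", err)
    }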
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
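Note: the NodePressure check above reads each node's reported capacity (here 304681132Ki of ephemeral storage and 8 CPUs per node). Listing the same fields with client-go looks roughly like this sketch, assuming a kubeconfig at the default location:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	home, _ := os.UserHomeDir()
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print the per-node capacity that the NodePressure check above inspects.
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    }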
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
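Note: the rest.Config dumped above has QPS:0 and Burst:0, so client-go falls back to its default client-side rate limiter (5 QPS, burst 10); that is why the "Waited before sending request ... client-side throttling" messages appear throughout the pod-readiness polling below. Raising the limits when building the client is straightforward; a sketch, assuming a hypothetical kubeconfig path:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; minikube keeps per-profile client certs instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst left at zero, client-go uses its defaults (5 QPS, burst 10)
    	// and logs "Waited before sending request" when callers exceed them.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }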
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
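Note: the failure above is a timeout, not a crash: the extra "kube-system pods Ready" wait is bounded by a 4m0s context, kube-proxy-dzrbh never reports Ready, and the run exits with GUEST_START: context deadline exceeded. The underlying shape is the usual context-with-deadline poll; a minimal sketch of that pattern (not minikube's actual code, and with a short timeout so the demo finishes quickly):

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check every interval until it returns true or the context's
    // deadline passes, mirroring the 4m0s "extra waiting" loop that timed out above.
    func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	// The real wait uses a 4m0s deadline; 5s keeps this example short.
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	err := waitFor(ctx, time.Second, func() bool { return false /* pod never Ready */ })
    	if errors.Is(err, context.DeadlineExceeded) {
    		fmt.Println("context deadline exceeded:", err)
    	}
    }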
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              7 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m22s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m22s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m25s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m22s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m19s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m32s (x8 over 7m33s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x8 over 7m33s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x7 over 7m33s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m25s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m25s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m23s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m54s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m32s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m52s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m52s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m38s  kube-proxy       
	  Normal  RegisteredNode  6m49s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m48s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m32s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m26s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m31s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m29s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m28s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m27s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.314829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315431Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"warn","ts":"2025-09-19T22:25:32.315457Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.315465Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.351210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.354520Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(6956058400243883992 12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479741Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a99fbed258953a7f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.262846ms"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479818Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6088e2429f689fd8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.344681ms"}
	{"level":"info","ts":"2025-09-19T22:31:25.543409Z","caller":"traceutil/trace.go:172","msg":"trace[1476697735] linearizableReadLoop","detail":"{readStateIndex:2212; appliedIndex:2212; }","duration":"122.469916ms","start":"2025-09-19T22:31:25.420904Z","end":"2025-09-19T22:31:25.543374Z","steps":["trace[1476697735] 'read index received'  (duration: 122.461259ms)","trace[1476697735] 'applied index is now lower than readState.Index'  (duration: 7.407µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:31:25.545247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.309293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:25.545343Z","caller":"traceutil/trace.go:172","msg":"trace[1198199391] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1836; }","duration":"124.432545ms","start":"2025-09-19T22:31:25.420893Z","end":"2025-09-19T22:31:25.545326Z","steps":["trace[1198199391] 'agreement among raft nodes before linearized reading'  (duration: 122.582946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:26.310807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.705072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:26.310897Z","caller":"traceutil/trace.go:172","msg":"trace[2094450770] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1839; }","duration":"182.81062ms","start":"2025-09-19T22:31:26.128070Z","end":"2025-09-19T22:31:26.310880Z","steps":["trace[2094450770] 'range keys from in-memory index tree'  (duration: 182.279711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:27.082780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.669043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082613715695 > lease_revoke:<id:70cc99641453c257>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:31:27.178782Z","caller":"traceutil/trace.go:172","msg":"trace[2040827292] transaction","detail":"{read_only:false; response_revision:1841; number_of_response:1; }","duration":"161.541003ms","start":"2025-09-19T22:31:27.017222Z","end":"2025-09-19T22:31:27.178763Z","steps":["trace[2040827292] 'process raft request'  (duration: 161.420124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.889764Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.078552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:43.889838Z","caller":"traceutil/trace.go:172","msg":"trace[1908677250] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1879; }","duration":"108.172765ms","start":"2025-09-19T22:31:43.781651Z","end":"2025-09-19T22:31:43.889824Z","steps":["trace[1908677250] 'range keys from in-memory index tree'  (duration: 108.036209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.890177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.618892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4215256431365582417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:1856 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:4215256431365582413 >> failure:<>>","response":"size:16"}
	
	
	==> kernel <==
	 22:32:09 up  1:14,  0 users,  load average: 1.44, 3.08, 24.01
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:31:23.791911       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:33.800280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:33.800319       1 main.go:301] handling current node
	I0919 22:31:33.800338       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:33.800343       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:33.800580       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:33.800596       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800572       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:43.800609       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:43.800843       1 main.go:301] handling current node
	I0919 22:31:43.800858       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:43.800864       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:53.791616       1 main.go:301] handling current node
	I0919 22:31:53.791632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:53.791637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791836       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:53.791852       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:03.792099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:03.792135       1 main.go:301] handling current node
	I0919 22:32:03.792151       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:03.792156       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:32:03.792364       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:03.792377       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.036591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.041406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:24:47.734451       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
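The repeated "Plugin Failed ... already assigned to node" / "Pod has been assigned to node. Abort adding it back to queue." pairs in the kube-scheduler log above are bind retries for pods that had already been placed; the follow-up lines show each pod did end up on the named node. A quick, hypothetical way to confirm placement for one of the affected pods (not part of the test):

  kubectl --context ha-434755 -n kube-system get pod kube-proxy-4cnsm -o jsonpath='{.spec.nodeName}'
  # expected to print ha-434755-m02, the node named in the scheduler error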
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (15.10s)
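The CopyFile failure is consistent with the audit log further down: every cp/ssh step that targets ha-434755-m04 has an empty END TIME, and the status output captured shortly afterwards reports that host as Stopped. A sketch of the step that cannot complete, using the same commands the test issues (taken from the audit table):

  out/minikube-linux-amd64 -p ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"
  # with the ha-434755-m04 host stopped, the read-back over SSH cannot succeed, so the copy is never verified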

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 node stop m02 --alsologtostderr -v 5: (10.74768927s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (539.619086ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:32:20.833721  234076 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:32:20.833879  234076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:20.833888  234076 out.go:374] Setting ErrFile to fd 2...
	I0919 22:32:20.833893  234076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:20.834062  234076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:32:20.834217  234076 out.go:368] Setting JSON to false
	I0919 22:32:20.834237  234076 mustload.go:65] Loading cluster: ha-434755
	I0919 22:32:20.834297  234076 notify.go:220] Checking for updates...
	I0919 22:32:20.834650  234076 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:32:20.834674  234076 status.go:174] checking status of ha-434755 ...
	I0919 22:32:20.835101  234076 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:32:20.856097  234076 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:32:20.856132  234076 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:32:20.856369  234076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:32:20.873214  234076 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:32:20.873468  234076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:32:20.873548  234076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:32:20.891401  234076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:32:20.984321  234076 ssh_runner.go:195] Run: systemctl --version
	I0919 22:32:20.989245  234076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:32:21.001057  234076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:32:21.056487  234076 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:32:21.045838492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:32:21.057062  234076 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:32:21.057089  234076 api_server.go:166] Checking apiserver status ...
	I0919 22:32:21.057128  234076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:32:21.069966  234076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:32:21.080084  234076 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:32:21.080145  234076 ssh_runner.go:195] Run: ls
	I0919 22:32:21.083725  234076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:32:21.089619  234076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:32:21.089645  234076 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:32:21.089659  234076 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:32:21.089678  234076 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:32:21.089910  234076 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:32:21.108873  234076 status.go:371] ha-434755-m02 host status = "Stopped" (err=<nil>)
	I0919 22:32:21.108903  234076 status.go:384] host is not running, skipping remaining checks
	I0919 22:32:21.108912  234076 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:32:21.108940  234076 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:32:21.109268  234076 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:32:21.129116  234076 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:32:21.129173  234076 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:32:21.129567  234076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:32:21.147109  234076 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:32:21.147408  234076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:32:21.147455  234076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:32:21.164412  234076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:32:21.257831  234076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:32:21.270029  234076 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:32:21.270056  234076 api_server.go:166] Checking apiserver status ...
	I0919 22:32:21.270091  234076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:32:21.282259  234076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:32:21.293372  234076 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:32:21.293448  234076 ssh_runner.go:195] Run: ls
	I0919 22:32:21.298040  234076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:32:21.303402  234076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:32:21.303435  234076 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:32:21.303448  234076 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:32:21.303468  234076 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:32:21.303746  234076 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:32:21.322222  234076 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:32:21.322245  234076 status.go:384] host is not running, skipping remaining checks
	I0919 22:32:21.322252  234076 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
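The stderr trace above shows how the status command derives each node's state: it inspects the node container and, for running hosts, probes the kubelet service and the apiserver healthz endpoint. The m02 result can be reproduced with the same inspect call the command logs (a sketch, outside the test harness):

  docker container inspect ha-434755-m02 --format '{{.State.Status}}'
  # typically prints "exited" for a stopped node, matching host: Stopped in the status output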
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5": ha-434755
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-434755-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-434755-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-434755-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5": ha-434755
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-434755-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-434755-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-434755-m04
type: Worker
host: Stopped
kubelet: Stopped
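Both assertions (ha_test.go:380 and ha_test.go:383) trip on the same condition: only two of the four nodes report Running, because ha-434755-m04 is down in addition to the deliberately stopped ha-434755-m02. A rough, hypothetical approximation of what the checks count (not the test's actual implementation):

  out/minikube-linux-amd64 -p ha-434755 status | grep -c "host: Running"      # 2 in the output above; the test appears to expect 3
  out/minikube-linux-amd64 -p ha-434755 status | grep -c "kubelet: Running"   # likewise 2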

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
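The inspect output above covers only the primary node's container. A hypothetical one-liner (not run by the test) to see all four node containers and their states at once:

  docker ps -a --filter "name=ha-434755" --format "table {{.Names}}\t{{.Status}}"
  # the Exited entries should correspond to ha-434755-m02 and ha-434755-m04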
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m03.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
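For reference, the profile saved above can be inspected outside of minikube. A minimal Go sketch that loads a few values from config.json; the field names (Name, Driver, Memory) are taken from the cluster config dumped above, but the exact on-disk JSON layout and field types are assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Subset of the profile config; names mirror the dump above.
    type profileConfig struct {
        Name   string `json:"Name"`
        Driver string `json:"Driver"`
        Memory int    `json:"Memory"` // MB, per the "Memory:3072" field above
    }

    func main() {
        // Path taken from the log line above.
        data, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json")
        if err != nil {
            panic(err)
        }
        var cfg profileConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("profile=%s driver=%s memory=%dMB\n", cfg.Name, cfg.Driver, cfg.Memory)
    }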
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
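The inspect-then-create sequence above (inspect fails with "network ha-434755 not found", then the network is created on 192.168.49.0/24) can be reproduced by hand. A rough Go sketch using os/exec with the same docker CLI flags shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        name, subnet, gateway := "ha-434755", "192.168.49.0/24", "192.168.49.1"

        // Inspect first; a non-zero exit means the network does not exist yet.
        if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
            fmt.Println("network already exists")
            return
        }

        // Same flags as the cli_runner invocation in the log above.
        create := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io="+name,
            name)
        if out, err := create.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("create failed: %v: %s", err, out))
        }
        fmt.Println("created network", name)
    }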
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
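The repeated container-inspect calls above amount to a readiness check. A hedged sketch of that check: poll docker container inspect until .State.Running reports true (the timeout and poll interval here are illustrative, not taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const name = "ha-434755"
        deadline := time.Now().Add(30 * time.Second) // illustrative timeout

        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Running}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                fmt.Println("container is running")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for container to run")
    }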
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
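The "native" SSH client used above connects to the forwarded port 127.0.0.1:32783 as user "docker" with the generated id_rsa key. A minimal standalone equivalent with golang.org/x/crypto/ssh, running the same "hostname" command (host key checking is skipped here purely for illustration, since the target is a throwaway local container):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user, and forwarded port taken from the log above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("hostname: %s", out)
    }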
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
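The find/sed one-liner above adds a missing "name" field to the loopback CNI config and pins cniVersion to 1.0.0. A hedged Go sketch of the same patch done with encoding/json instead of sed; the concrete file path is an assumption, since the log matches /etc/cni/net.d/*loopback.conf* by glob:

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Hypothetical path; the real file is whatever the glob matches.
        const path = "/etc/cni/net.d/200-loopback.conf"

        raw, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var conf map[string]interface{}
        if err := json.Unmarshal(raw, &conf); err != nil {
            panic(err)
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback" // same default the sed command injects
        }
        conf["cniVersion"] = "1.0.0" // pinned, as in the log

        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile(path, out, 0644); err != nil {
            panic(err)
        }
    }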
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
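The exact 129-byte daemon.json payload is not printed in the log. As an assumption, the only setting implied by the surrounding lines is the systemd cgroup driver; a sketch of writing such a file (locally here for brevity, whereas the log copies it over SSH, and the real file may contain additional keys):

    package main

    import "os"

    func main() {
        // Assumed content: only the cgroup-driver switch is implied by the log.
        daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=systemd"]}` + "\n")
        if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0644); err != nil {
            panic(err)
        }
    }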
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
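The bash one-liner above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the network gateway. A rough Go equivalent of the same filter-and-append step:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.49.1\thost.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any existing host.minikube.internal line, then append a fresh one,
        // mirroring the grep -v / echo pipeline in the log.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }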
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
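The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch using gopkg.in/yaml.v3 that walks such a stream and prints each document's kind, which can help when sanity-checking a dump like this; the input filename is a hypothetical local copy of /var/tmp/minikube/kubeadm.yaml.new:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }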
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
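Although ipvs-based control-plane load balancing was skipped above, the manifest still configures 192.168.49.254:8443 as the VIP for the API server. A hedged sketch that probes /healthz on that endpoint from a host on the 192.168.49.0/24 network; TLS verification is skipped only because this is a reachability check, not an auth check:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // VIP and port taken from the kube-vip manifest above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
            },
        }
        resp, err := client.Get("https://192.168.49.254:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%s body=%s\n", resp.Status, body)
    }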
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
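The apiserver serving certificate written above is issued for the service ClusterIP (10.96.0.1), 127.0.0.1, 10.0.0.1, the node IP (192.168.49.2) and the HA virtual IP (192.168.49.254); a missing entry in that SAN list is what later surfaces as TLS verification errors when clients address the VIP. The list can be confirmed with openssl (a sketch using the profile path from this run):

    # Inspect the SANs baked into the generated apiserver certificate.
    CERT=/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
    openssl x509 -noout -text -in "${CERT}" | grep -A1 'Subject Alternative Name'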
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
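The three blocks above repeat one pattern for each extra CA: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under <hash>.0 (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two user certs). A generalized sketch of that trust-store installation step:

    # Make a CA visible to OpenSSL-based clients via the hashed-symlink trust dir.
    CERT=/usr/share/ca-certificates/minikubeCA.pem   # already copied onto the node above
    HASH=$(openssl x509 -hash -noout -in "${CERT}")  # prints e.g. b5213941 in this run
    sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0" # <subject-hash>.0 is the lookup name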
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
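Both join commands pin the cluster CA with --discovery-token-ca-cert-hash; the value printed here (6e349388…) is specific to this run and is the SHA-256 of the CA's DER-encoded public key. Assuming the RSA CA generated above and the certificate dir kubeadm was pointed at (/var/lib/minikube/certs), it can be recomputed with the standard recipe from the kubeadm documentation:

    # Recompute the discovery-token CA cert hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'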
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
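The replace a few lines above is what injects that record: the sed expressions add a hosts stanza (192.168.49.1 host.minikube.internal, with fallthrough) ahead of the forward plugin in the coredns Corefile and a log directive ahead of errors. The patched Corefile can be read back with the same kubectl binary and kubeconfig used in this run:

    # Show the patched Corefile; it should now contain the hosts block before
    # "forward . /etc/resolv.conf" and a "log" line before "errors".
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'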
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
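The 3.3s step above seeds the new node's /var data before its container exists: a throwaway container mounts the lz4-compressed preload tarball read-only alongside the ha-434755-m02 named volume and untars the cached images straight into it. The same command, reformatted for readability (all names are the ones from this run):

    # Extract a preloaded image tarball directly into a named docker volume.
    TARBALL=/home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
    docker run --rm \
      --entrypoint /usr/bin/tar \
      -v "${TARBALL}:/preloaded.tar:ro" \
      -v ha-434755-m02:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      -I lz4 -xf /preloaded.tar -C /extractDir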
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
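The commands above show how the node's preloaded images are staged: a named volume (ha-434755-m03) is created, and a one-shot container mounts both the host-side tarball (read-only) and the volume, then untars into it. A minimal shell sketch of the same technique, with assumed image and file names rather than the minikube ones:

	# Populate a named Docker volume from a host tarball via a throwaway container.
	# "demo-data", "preload.tar" and the ubuntu:22.04 image are illustrative assumptions.
	docker volume create demo-data
	docker run --rm \
	  -v "$PWD/preload.tar":/preload.tar:ro \
	  -v demo-data:/extract \
	  --entrypoint /usr/bin/tar \
	  ubuntu:22.04 -xf /preload.tar -C /extract

The volume then carries the extracted data into the node container started afterwards, so nothing has to be pulled or copied inside that container.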
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
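The sequence above is an update-only-if-changed pattern: the candidate unit is written to docker.service.new, diffed against the installed unit, and only when the two differ is the new file moved into place and the daemon reloaded, enabled and restarted. That keeps re-provisioning from restarting Docker needlessly. A generic shell sketch of the pattern (paths assumed, not the minikube source):

	# Replace a systemd unit and restart its service only when the content changed.
	UNIT=/lib/systemd/system/docker.service   # assumed path
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi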
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
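Note the shape of the hosts update above: the filtered file plus the new host.minikube.internal entry is written to a temp file first and then installed with sudo cp, because a plain `sudo ... > /etc/hosts` would not work (the redirection is performed by the unprivileged shell, not by sudo). A small sketch of the same pattern with an assumed entry:

	# Rewrite /etc/hosts as an unprivileged user, with sudo doing only the final copy.
	ENTRY=$'192.168.49.1\thost.minikube.internal'   # assumed value
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$ENTRY"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts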
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
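	The cert distribution above follows the standard OpenSSL trust-store layout: each CA is copied to /usr/share/ca-certificates and then linked under /etc/ssl/certs by its subject hash, which is exactly what the openssl x509 -hash calls compute. A minimal sketch of that step (hypothetical cert name, assuming openssl is installed on the node):
	# Compute the subject hash OpenSSL uses for CA lookup, then create the
	# /etc/ssl/certs/<hash>.0 symlink so the cert is trusted system-wide.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"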
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
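	Because the lsmod check above found no ip_vs modules, kube-vip is generated without IPVS control-plane load balancing and relies on the leader-election/ARP settings shown in the manifest. A hypothetical way to make the IPVS path available before such a start (assuming the modules are built for the node's kernel):
	# Load the IPVS kernel modules kube-vip can use for control-plane
	# load balancing, then confirm they are present.
	for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do sudo modprobe "$m"; done
	lsmod | grep ip_vs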
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
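	The healthz probe above targets the first control-plane endpoint directly rather than the VIP. The same check can be reproduced by hand with curl; this is a hedged sketch that reuses the CA and client cert paths logged earlier, not a command taken from the test itself:
	# Query the apiserver health endpoint with the cluster CA and the
	# profile's client certificate (paths as shown in this log).
	MK=/home/jenkins/minikube-integration/21594-142711/.minikube
	curl --cacert "$MK/ca.crt" \
	     --cert "$MK/profiles/ha-434755/client.crt" \
	     --key "$MK/profiles/ha-434755/client.key" \
	     https://192.168.49.2:8443/healthz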
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
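	The run fails once the 4m0s extra-wait budget expires while kube-proxy-dzrbh is still Pending. If the cluster were still up, a plausible next step would be to inspect that pod directly; the commands below are a debugging sketch and assume the kubeconfig context created for this profile is named ha-434755:
	# Inspect the pod that never became Ready (names taken from the log above).
	kubectl --context ha-434755 -n kube-system get pod kube-proxy-dzrbh -o wide
	kubectl --context ha-434755 -n kube-system describe pod kube-proxy-dzrbh
	kubectl --context ha-434755 -n kube-system logs kube-proxy-dzrbh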
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              7 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m35s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m35s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m38s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m35s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s (x8 over 7m46s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s (x8 over 7m46s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s (x7 over 7m46s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m36s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m45s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m5s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m5s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m51s  kube-proxy       
	  Normal  RegisteredNode  7m2s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  7m1s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m45s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m39s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m44s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m42s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m41s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m40s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:25:32.514484Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:32.514566Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:25:34.029285Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"a99fbed258953a7f","bytes":933879,"size":"934 kB","took":"30.016077713s"}
	{"level":"info","ts":"2025-09-19T22:25:38.912832Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:25:44.676267Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:26:02.284428Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"6088e2429f689fd8","bytes":1475095,"size":"1.5 MB","took":"30.016313758s"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479741Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"a99fbed258953a7f","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.262846ms"}
	{"level":"warn","ts":"2025-09-19T22:31:25.479818Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6088e2429f689fd8","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"14.344681ms"}
	{"level":"info","ts":"2025-09-19T22:31:25.543409Z","caller":"traceutil/trace.go:172","msg":"trace[1476697735] linearizableReadLoop","detail":"{readStateIndex:2212; appliedIndex:2212; }","duration":"122.469916ms","start":"2025-09-19T22:31:25.420904Z","end":"2025-09-19T22:31:25.543374Z","steps":["trace[1476697735] 'read index received'  (duration: 122.461259ms)","trace[1476697735] 'applied index is now lower than readState.Index'  (duration: 7.407µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:31:25.545247Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.309293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:25.545343Z","caller":"traceutil/trace.go:172","msg":"trace[1198199391] range","detail":"{range_begin:/registry/statefulsets; range_end:; response_count:0; response_revision:1836; }","duration":"124.432545ms","start":"2025-09-19T22:31:25.420893Z","end":"2025-09-19T22:31:25.545326Z","steps":["trace[1198199391] 'agreement among raft nodes before linearized reading'  (duration: 122.582946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:26.310807Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.705072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:26.310897Z","caller":"traceutil/trace.go:172","msg":"trace[2094450770] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1839; }","duration":"182.81062ms","start":"2025-09-19T22:31:26.128070Z","end":"2025-09-19T22:31:26.310880Z","steps":["trace[2094450770] 'range keys from in-memory index tree'  (duration: 182.279711ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:27.082780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.669043ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040082613715695 > lease_revoke:<id:70cc99641453c257>","response":"size:29"}
	{"level":"info","ts":"2025-09-19T22:31:27.178782Z","caller":"traceutil/trace.go:172","msg":"trace[2040827292] transaction","detail":"{read_only:false; response_revision:1841; number_of_response:1; }","duration":"161.541003ms","start":"2025-09-19T22:31:27.017222Z","end":"2025-09-19T22:31:27.178763Z","steps":["trace[2040827292] 'process raft request'  (duration: 161.420124ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.889764Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.078552ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:31:43.889838Z","caller":"traceutil/trace.go:172","msg":"trace[1908677250] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1879; }","duration":"108.172765ms","start":"2025-09-19T22:31:43.781651Z","end":"2025-09-19T22:31:43.889824Z","steps":["trace[1908677250] 'range keys from in-memory index tree'  (duration: 108.036209ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:31:43.890177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.618892ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4215256431365582417 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:1856 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:4215256431365582413 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-09-19T22:32:17.227641Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:17.227889Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:17.233900Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a99fbed258953a7f","error":"failed to dial a99fbed258953a7f on stream Message (EOF)"}
	{"level":"warn","ts":"2025-09-19T22:32:17.483999Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:19.102885Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:19.102942Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:22.186254Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	
	
	==> kernel <==
	 22:32:22 up  1:14,  0 users,  load average: 1.20, 2.94, 23.63
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:31:33.800596       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800572       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:43.800609       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:31:43.800828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:43.800843       1 main.go:301] handling current node
	I0919 22:31:43.800858       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:43.800864       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:53.791616       1 main.go:301] handling current node
	I0919 22:31:53.791632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:53.791637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791836       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:53.791852       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:03.792099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:03.792135       1 main.go:301] handling current node
	I0919 22:32:03.792151       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:03.792156       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:32:03.792364       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:03.792377       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:13.792555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:13.792593       1 main.go:301] handling current node
	I0919 22:32:13.792634       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:13.792644       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:32:13.792856       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:13.792870       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	I0919 22:32:12.547033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.112848       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:32:21.329211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (13.10s)
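The repeated "Plugin Failed ... already assigned" entries in the kube-scheduler log above are binding conflicts: by the time the DefaultBinder plugin issued the pods/binding request, spec.nodeName was already set on the pod, so the apiserver rejected the binding ("Operation cannot be fulfilled") and the scheduler logged "Pod has been assigned to node. Abort adding it back to queue." A hypothetical spot-check (assuming the ha-434755 kube context from this run were still reachable) could confirm the assignment the error messages report:

    kubectl --context ha-434755 -n kube-system get pod kube-proxy-4cnsm -o jsonpath='{.spec.nodeName}{"\n"}'
    kubectl --context ha-434755 -n kube-system get pod kindnet-74q9s -o jsonpath='{.spec.nodeName}{"\n"}'

Both commands would be expected to print ha-434755-m02, the node named in the scheduler errors.
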

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-434755" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-434755\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-434755\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-434755\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidi
a-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOpt
imizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
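The full "docker inspect ha-434755" dump above can also be reduced to individual fields with Go-template format strings; a sketch, assuming the container from this run still exists:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' ha-434755
    docker inspect -f '{{(index .NetworkSettings.Networks "ha-434755").IPAddress}}' ha-434755
    docker port ha-434755 8443/tcp

With the state captured above these would print "running pid=203722", "192.168.49.2", and "127.0.0.1:32786" respectively.
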
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
E0919 22:32:25.092363  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m03.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
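The lines above show minikube picking the free private subnet 192.168.49.0/24, creating a dedicated bridge network for the profile, and deriving the node's static IP 192.168.49.2 (the first client address after the gateway). A quick way to confirm the resulting network by hand, reusing the same inspect-template style as the log (assumes the network name ha-434755 from this run):

    # print the subnet and gateway of the profile network
    docker network inspect ha-434755 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # show the bridge options minikube set (icc, ip-masq, MTU label)
    docker network inspect ha-434755 --format '{{json .Options}}'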
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
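configureAuth above generated a Docker server certificate whose SANs cover 127.0.0.1, 192.168.49.2, ha-434755, localhost and minikube, then copied ca.pem, server.pem and server-key.pem into /etc/docker inside the node. A sketch of how to double-check the SANs on the running node container (assumes openssl is available in the node image, which the later `openssl version` step in this log suggests):

    docker exec ha-434755 sh -c "openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"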
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
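The long SSH command above is minikube's idempotent unit update: it renders docker.service.new, diffs it against the installed unit, and only swaps the file in and restarts Docker when the diff is non-empty, which is why the unified diff appears in the command output. The same pattern in isolation (a sketch; render_unit stands in for whatever produces the desired unit text and is not a real command):

    render_unit > /tmp/docker.service.new   # hypothetical: emit the desired unit text
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    fi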
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
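The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), pin the pause image, and normalize the runc runtime before restarting containerd; Docker itself is pointed at the same driver a few steps below via /etc/docker/daemon.json. Two quick checks against the finished node (a sketch, once Docker has been restarted with that daemon.json):

    docker exec ha-434755 stat -fc %T /sys/fs/cgroup                 # cgroup2fs indicates cgroup v2
    docker exec ha-434755 docker info --format '{{.CgroupDriver}}'   # expect "systemd"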
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
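At this point the CRI shim is cri-dockerd: /etc/crictl.yaml, written a few lines earlier, points crictl at unix:///var/run/cri-dockerd.sock, and the version query reports RuntimeName docker / RuntimeVersion 28.4.0. The same query with the endpoint spelled out explicitly (a sketch using the socket path from the log):

    docker exec ha-434755 crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version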
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
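The /etc/hosts update above strips any existing host.minikube.internal line, appends the gateway mapping, writes the result to a temp file and copies it back with `cp` rather than editing in place: inside a container /etc/hosts is a bind mount, so rename-based tools such as `sed -i` typically fail on it with "Device or resource busy". The same pattern for an arbitrary entry (a sketch; 10.0.0.5 example.internal is a made-up mapping):

    { grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.5\texample.internal'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts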
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
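The generated kube-vip static pod above provides the HA control-plane endpoint: it leader-elects on the plndr-cp-lock lease and announces the virtual IP 192.168.49.254 (the APIServerHAVIP from the cluster config) on eth0 via ARP; that same address is mapped to control-plane.minikube.internal in /etc/hosts a few lines further down. Once the cluster is up, the VIP can be checked like this (a sketch; the static pod name follows kubelet's <name>-<node> convention and the kubectl context name is assumed to match the profile):

    kubectl --context ha-434755 -n kube-system get pod kube-vip-ha-434755 -o wide
    docker exec ha-434755 ip -4 addr show eth0    # 192.168.49.254 appears once this node holds the lease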
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
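With the drop-in and unit file written and systemd reloaded, the ExecStart rendered earlier (bootstrap kubeconfig, /var/lib/kubelet/config.yaml, --node-ip=192.168.49.2) is what kubelet now runs with; until `kubeadm init` produces the bootstrap files it will typically just restart on a short loop. To see the effective unit plus drop-ins on the node (a sketch):

    docker exec ha-434755 systemctl cat kubelet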
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
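The block above is the node-side CA trust installation: each PEM is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash so TLS clients can resolve it. A minimal sketch of that step, using the minikubeCA.pem path from this run:

    # compute the subject hash (b5213941 in this log) and publish the cert under it
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"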
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
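First-start detection is simply a stat of the kubelet client certificate; because the file is absent, the full kubeadm init path is taken below. The equivalent manual check, with the path from this run:

    stat /var/lib/minikube/certs/apiserver-kubelet-client.crt || echo "no cert yet - treating as first start"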
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
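The four grep/rm pairs above are the stale-config cleanup: each kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A compact sketch of that loop, with the endpoint and paths used in this run:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      CONF="/etc/kubernetes/${f}.conf"
      # keep the file only if it already references the expected endpoint
      sudo grep -q "$ENDPOINT" "$CONF" || sudo rm -f "$CONF"
    done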
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
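The join commands printed by kubeadm above can be replayed by hand; per that output, a control-plane join differs from a worker join only by the --control-plane flag, after the certificate authorities and service-account keys have been copied to the joining node. Values below are the ones from this run:

    kubeadm join control-plane.minikube.internal:8443 \
      --token g87idd.cyuzs8jougdixinx \
      --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
      --control-plane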
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
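The repeated `kubectl get sa default` calls above are a readiness poll: setup only proceeds once the default ServiceAccount exists, which is also when the elevateKubeSystemPrivileges step is considered done. The equivalent manual check, using the node-local kubeconfig seen throughout this log:

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig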
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
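The pipeline at 22:24:47.020097 rewrites the CoreDNS ConfigMap to add a hosts{} block mapping host.minikube.internal to the network gateway (192.168.49.1). Once injected, the record can be confirmed with a sketch like:

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'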
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
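The static IP for the second node is derived from the existing ha-434755 network: minikube reads the subnet and picks the next address after the primary node's (192.168.49.3 following 192.168.49.2). The subnet and current container addresses can be inspected with:

    docker network inspect ha-434755 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}} {{range .Containers}}{{.IPv4Address}} {{end}}'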
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
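The two docker commands above implement the preload pattern: a named volume is created for the node, a throwaway kicbase container untars the cached image tarball into it, and that volume later backs /var of the node container so dockerd starts with the images already present. Reduced to its essentials (PRELOAD_TARBALL stands for the preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 path from the cache):

    docker volume create ha-434755-m02
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v ha-434755-m02:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.48 -I lz4 -xf /preloaded.tar -C /extractDir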
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
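configureAuth generates a per-machine Docker TLS server certificate whose SANs cover every name the daemon may be reached by (loopback, the static IP, the hostname). The SANs on the resulting PEM from this run can be checked with openssl:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'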
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
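Note: the SSH command at 22:24:53 above uses an idempotent replace pattern: the freshly generated unit is only swapped in, and Docker only re-enabled and restarted, when it differs from the file already on the node (diff -u exits non-zero on any difference, which triggers the block). A minimal sketch of the same pattern, with hypothetical file and service names:

    # Replace /etc/example.conf with a freshly generated example.conf.new only when they differ,
    # then reload and restart the dependent service. If the files match, nothing happens.
    sudo diff -u /etc/example.conf /etc/example.conf.new || {
      sudo mv /etc/example.conf.new /etc/example.conf
      sudo systemctl daemon-reload && sudo systemctl restart example.service
    }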
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
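Note: the /etc/crictl.yaml written just above (22:24:55.562) tells crictl which CRI socket to use; earlier in this log it pointed at containerd, and here it is rewritten for cri-dockerd once the Docker runtime is selected. A small sketch of the same write, assuming only that crictl reads this path by default:

    # Point crictl at the cri-dockerd socket (the value used by this test run).
    sudo mkdir -p /etc
    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
    sudo crictl info   # should now report runtime status over that socket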
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
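Note: the 129-byte /etc/docker/daemon.json pushed here is not printed in the log; the surrounding line only says Docker is being configured for the "systemd" cgroup driver. A minimal sketch of a daemon.json that does that (assumed content, only the cgroup-driver setting is implied by the log):

    # Hypothetical minimal daemon.json selecting the systemd cgroup driver,
    # followed by a restart so dockerd picks it up (as the log does right after).
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker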
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
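Note: the /etc/hosts update above first strips any existing line ending in the managed hostname, then appends a fresh entry, staging the result in a temp file before copying it back. The same pattern with a hypothetical hostname and address:

    # Refresh a single managed /etc/hosts entry without disturbing other lines.
    HOST=my.internal.example   # hypothetical hostname
    IP=192.168.49.1
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$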
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
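Note: the openssl/ln sequence above installs each CA into the node's trust store: `openssl x509 -hash -noout` prints the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (hence b5213941.0 for minikubeCA.pem, and 3ec20f2e.0 / 51391683.0 for the other certs in this log). The same step for a single hypothetical certificate:

    # Install one CA certificate so OpenSSL-based clients trust it.
    CERT=/usr/share/ca-certificates/my-ca.pem        # hypothetical path
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. prints "b5213941"
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"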
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
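Note: the `lsmod | grep ip_vs` check at 22:24:58.367 exited non-zero, so control-plane load-balancing in kube-vip was skipped and only the VIP manifest above is written. A quick way to check (and, on hosts where loading modules is possible, load) the IPVS modules that mode relies on; the kic container in this run cannot load them itself:

    # See whether any IPVS kernel modules are loaded on the node.
    lsmod | grep ip_vs || echo "ip_vs not loaded"
    # On a host with module loading available (not inside the kic container), e.g.:
    sudo modprobe -a ip_vs ip_vs_rr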
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
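Note: joining m02 as a second control plane is a plain `kubeadm join ... --control-plane` against the VIP endpoint (control-plane.minikube.internal:8443), followed by enabling kubelet; because the shared CA, sa, front-proxy and etcd keys were copied to the node earlier in this log (the scp steps at 22:24:57), no --certificate-key is needed. The general shape, with placeholder token and CA hash rather than the real values shown above:

    # Join an existing cluster as an additional control-plane node (placeholders, not real credentials).
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<ca-cert-hash> \
      --control-plane \
      --apiserver-advertise-address 192.168.49.3 \
      --apiserver-bind-port 8443
    sudo systemctl enable --now kubelet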
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
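	(The m03 node container is created by the single docker run above; for readability, the same invocation with its flags split across lines — label flags omitted, all values taken verbatim from that log line:)

	    docker run -d -t --privileged --security-opt seccomp=unconfined \
	      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	      --hostname ha-434755-m03 --name ha-434755-m03 \
	      --network ha-434755 --ip 192.168.49.4 \
	      --volume ha-434755-m03:/var --security-opt apparmor=unconfined \
	      --memory=3072mb -e container=docker \
	      --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1

	(The 127.0.0.1:: publishes let Docker pick free host ports for SSH (22), the Docker API (2376) and the API server (8443), which is why the SSH connections that follow go to 127.0.0.1:32793.)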
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
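	(The command above is a compare-then-swap: diff exits non-zero when the rendered unit differs from the installed one, so the replacement and restart only run when something actually changed. The same sequence, split out:)

	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }

	(The diff in the next log entry shows exactly what changed: the ExecStart override, the NO_PROXY environment entries and the Limit*/TasksMax settings.)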
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
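	(kube-vip can only do control-plane load-balancing when the IPVS kernel modules are present; the lsmod check above finds none, so the config generated below falls back to the ARP-advertised VIP 192.168.49.254 with leader election. On a host that does ship the modules they could be loaded beforehand — purely illustrative, not part of this run:)

	    sudo modprobe ip_vs
	    lsmod | grep ip_vs    # the same check minikube runs above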
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
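	(As with host.minikube.internal earlier, the node's /etc/hosts is rewritten so that control-plane.minikube.internal resolves to the kube-vip VIP 192.168.49.254; that name and port 8443 are exactly the endpoint the kubeadm join below dials. A quick check on the node, illustratively:)

	    getent hosts control-plane.minikube.internal    # expected: 192.168.49.254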
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
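
The join above reduces to four steps that can also be run by hand against a live control plane: mint a join token on an existing control-plane node, run kubeadm join with --control-plane on the new machine, then label it and remove the control-plane NoSchedule taint. A rough sketch, with the token and CA hash left as placeholders (addresses and node names as in this run):

    # on an existing control-plane node: print a reusable join command
    sudo kubeadm token create --print-join-command --ttl=0

    # on the new node: join as an additional control-plane member
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.49.4 \
      --cri-socket unix:///var/run/cri-dockerd.sock

    # tag the node and let workloads schedule on it
    kubectl label --overwrite nodes ha-434755-m03 minikube.k8s.io/primary=false
    kubectl taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
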
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
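
The readiness poll above can be reproduced with kubectl, which has a built-in wait for node conditions; a sketch assuming a kubeconfig pointing at this cluster:

    # block until the new node reports Ready, with the same 6m budget as the log
    kubectl wait --for=condition=Ready node/ha-434755-m03 --timeout=6m

    # or read the condition directly
    kubectl get node ha-434755-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
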
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
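
The healthz probe is a plain HTTPS GET against the apiserver. The same check can be made from a shell; certificate verification is skipped here since only liveness is being tested:

    # expect HTTP 200 with body "ok"
    curl -k https://192.168.49.2:8443/healthz
    # per-check detail, if needed
    curl -k "https://192.168.49.2:8443/healthz?verbose"
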
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
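
The "extra waiting" phase cycles through one label selector per control-plane component plus kube-dns and kube-proxy. The same pod sets can be listed by hand; a sketch:

    # list the kube-system pods behind each label class the wait loop checks
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system get pods -l "$sel"
    done
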
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
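
The failure is the extra four-minute wait expiring while kube-proxy-dzrbh never leaves Pending (the poll above keeps re-checking node ha-434755-m03, so the pod is presumably scheduled there). The usual next step when triaging such a run is to look at the pod's events and, if a container ever started, its logs; a sketch:

    kubectl -n kube-system get pod kube-proxy-dzrbh -o wide
    kubectl -n kube-system describe pod kube-proxy-dzrbh     # events usually explain a stuck Pending
    kubectl -n kube-system logs kube-proxy-dzrbh || true     # only useful once a container has started
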
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   2 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         7 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              7 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         7 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     7 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         7 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         7 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         7 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         7 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
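
This table is the runtime's view from the primary node; an equivalent listing can be pulled interactively from the same machine, for example (docker ps works directly since the runtime is docker; crictl gives the CRI view if it is present on the node):

    minikube -p ha-434755 ssh -- docker ps -a
    minikube -p ha-434755 ssh -- sudo crictl ps -a
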
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
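	The repeated "network is unreachable" errors from this replica indicate CoreDNS could reach neither the in-cluster API endpoint (10.96.0.1:443) nor the upstream resolver it forwards to (192.168.49.1:53). A minimal way to exercise the same lookup path by hand, assuming kubectl is pointed at the ha-434755 cluster and that the cluster DNS service sits at 10.96.0.10 (the address implied by the PTR queries above), is a throwaway busybox pod; the pod name and image tag below are illustrative:
	
	  # hypothetical probe: resolve an in-cluster name through the cluster DNS service
	  kubectl run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- \
	    nslookup kubernetes.default.svc.cluster.local 10.96.0.10
	  # pointing the same probe at 192.168.49.1 would exercise the upstream path the errors refer to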
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m37s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m37s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m40s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m37s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m35s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m47s (x8 over 7m48s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s (x8 over 7m48s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s (x7 over 7m48s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m40s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m38s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           7m9s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m47s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f074940c6024fccb9ca090ae79eac96
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m7s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m7s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m54s  kube-proxy       
	  Normal  RegisteredNode  7m4s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  7m3s   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode  6m47s  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:32:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m41s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m46s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  6m44s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m43s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  6m42s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
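	As a quick cross-check of the node descriptions above, each "Allocated resources" summary is just the per-pod columns summed: on ha-434755 the CPU requests are 2x100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, i.e. 950m / 8000m on an 8-CPU node, reported as 11%; the only CPU limit set is kindnet's 100m (1%).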
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"warn","ts":"2025-09-19T22:32:24.110163Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.128897Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.209858Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.309670Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.399281Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.409917Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.509925Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.610547Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.709586Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.725737Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.728252Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.731837Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.736237Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.740475Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.740653Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.745553Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.749627Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.753785Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.757657Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.759981Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.761867Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.764273Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.768682Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.775258Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:32:24.782336Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:32:24 up  1:14,  0 users,  load average: 1.20, 2.94, 23.63
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:31:43.800864       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791584       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:53.791616       1 main.go:301] handling current node
	I0919 22:31:53.791632       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:31:53.791637       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:31:53.791836       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:31:53.791852       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:03.792099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:03.792135       1 main.go:301] handling current node
	I0919 22:32:03.792151       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:03.792156       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:32:03.792364       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:03.792377       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:13.792555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:13.792593       1 main.go:301] handling current node
	I0919 22:32:13.792634       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:13.792644       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:32:13.792856       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:13.792870       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:23.800596       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:32:23.800631       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:32:23.800868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:23.800883       1 main.go:301] handling current node
	I0919 22:32:23.800896       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:32:23.800900       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:24:47.782975       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:25:42.022930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	I0919 22:32:12.547033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.112848       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:32:21.329211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.38s)
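
The post-mortem helper above (helpers_test.go:269) looks for any pod that is not in the Running phase. A minimal standalone sketch of the same query, assuming kubectl and the ha-434755 context are reachable from the host running the suite (an illustration, not code from the test suite):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same field selector the helper uses: any pod whose phase is not
        // Running (Pending, Succeeded, Failed, Unknown).
        out, err := exec.Command(
            "kubectl", "--context", "ha-434755", "get", "po", "-A",
            "-o=jsonpath={.items[*].metadata.name}",
            "--field-selector=status.phase!=Running",
        ).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl failed: %v\n%s", err, out)
        }
        fmt.Printf("non-Running pods: %s\n", out)
    }

An empty result simply means every pod was in the Running phase at the moment the post-mortem ran.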

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (88.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 node start m02 --alsologtostderr -v 5: (36.848835577s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (804.112322ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:02.424119  245424 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:02.424409  245424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:02.424422  245424 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:02.424428  245424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:02.424706  245424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:02.424904  245424 out.go:368] Setting JSON to false
	I0919 22:33:02.424929  245424 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:02.425017  245424 notify.go:220] Checking for updates...
	I0919 22:33:02.425361  245424 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:02.425395  245424 status.go:174] checking status of ha-434755 ...
	I0919 22:33:02.425921  245424 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:02.458367  245424 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:02.458413  245424 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:02.458810  245424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:02.476681  245424 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:02.476966  245424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:02.477016  245424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:02.496452  245424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:02.593772  245424 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:02.599056  245424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:02.615570  245424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:02.707533  245424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:02.694091013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:02.708435  245424 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:02.708487  245424 api_server.go:166] Checking apiserver status ...
	I0919 22:33:02.708568  245424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:02.728432  245424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:02.741009  245424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:02.741071  245424 ssh_runner.go:195] Run: ls
	I0919 22:33:02.745739  245424 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:02.750986  245424 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:02.751014  245424 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:02.751027  245424 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:02.751045  245424 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:02.751364  245424 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:02.772190  245424 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:02.772222  245424 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:02.772574  245424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:02.792867  245424 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:02.793224  245424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:02.793290  245424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:02.814066  245424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:02.913084  245424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:02.926057  245424 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:02.926085  245424 api_server.go:166] Checking apiserver status ...
	I0919 22:33:02.926117  245424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:02.938407  245424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:02.949019  245424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:02.949116  245424 ssh_runner.go:195] Run: ls
	I0919 22:33:02.953205  245424 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:02.957553  245424 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:02.957576  245424 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:02.957585  245424 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:02.957604  245424 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:02.957923  245424 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:02.977282  245424 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:02.977311  245424 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:02.977682  245424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:02.996259  245424 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:02.996626  245424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:02.996682  245424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:03.016454  245424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:03.110947  245424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:03.126114  245424 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:03.126146  245424 api_server.go:166] Checking apiserver status ...
	I0919 22:33:03.126205  245424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:03.138094  245424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:03.147901  245424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:03.147955  245424 ssh_runner.go:195] Run: ls
	I0919 22:33:03.151462  245424 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:03.157828  245424 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:03.157853  245424 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:03.157862  245424 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:03.157882  245424 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:03.158136  245424 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:03.176442  245424 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:03.176462  245424 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:03.176468  245424 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:03.182126  146335 retry.go:31] will retry after 1.221231438s: exit status 7
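
ha_test.go:430 keeps re-running the status command because exit status 7 from minikube status means at least one node is not fully up (here the worker ha-434755-m04 reports host: Stopped). A rough sketch of that poll-and-retry loop using only the standard library; the binary path and profile name are taken from the output above, and the fixed back-off is illustrative rather than what retry.go actually computes:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for attempt := 1; attempt <= 5; attempt++ {
            cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-434755",
                "status", "--alsologtostderr", "-v", "5")
            err := cmd.Run()
            if err == nil {
                fmt.Println("status exited 0: every node is running")
                return
            }
            var exitErr *exec.ExitError
            if errors.As(err, &exitErr) {
                // Exit status 7 is the code the harness above retries on.
                fmt.Printf("attempt %d: exit status %d, retrying\n",
                    attempt, exitErr.ExitCode())
            }
            time.Sleep(2 * time.Second) // illustrative back-off
        }
        fmt.Println("gave up: cluster never reported all nodes running")
    }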
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (708.112654ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:04.446577  245893 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:04.446686  245893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:04.446696  245893 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:04.446700  245893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:04.446938  245893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:04.447129  245893 out.go:368] Setting JSON to false
	I0919 22:33:04.447151  245893 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:04.447216  245893 notify.go:220] Checking for updates...
	I0919 22:33:04.447590  245893 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:04.447621  245893 status.go:174] checking status of ha-434755 ...
	I0919 22:33:04.448156  245893 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:04.468744  245893 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:04.468774  245893 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:04.469111  245893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:04.488403  245893 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:04.488739  245893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:04.488807  245893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:04.505345  245893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:04.597937  245893 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:04.602633  245893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:04.615075  245893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:04.669747  245893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:04.659029624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:04.670264  245893 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:04.670297  245893 api_server.go:166] Checking apiserver status ...
	I0919 22:33:04.670330  245893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:04.682890  245893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:04.692233  245893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:04.692284  245893 ssh_runner.go:195] Run: ls
	I0919 22:33:04.695646  245893 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:04.701216  245893 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:04.701241  245893 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:04.701255  245893 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:04.701285  245893 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:04.701579  245893 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:04.718721  245893 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:04.718742  245893 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:04.719020  245893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:04.737239  245893 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:04.737601  245893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:04.737653  245893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:04.754309  245893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:04.846851  245893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:04.860213  245893 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:04.860246  245893 api_server.go:166] Checking apiserver status ...
	I0919 22:33:04.860284  245893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:04.873899  245893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:04.886212  245893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:04.886266  245893 ssh_runner.go:195] Run: ls
	I0919 22:33:04.890080  245893 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:04.895374  245893 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:04.895399  245893 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:04.895411  245893 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:04.895439  245893 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:04.895778  245893 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:04.919830  245893 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:04.919859  245893 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:04.920173  245893 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:04.938128  245893 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:04.938432  245893 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:04.938511  245893 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:04.954913  245893 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:05.047804  245893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:05.059733  245893 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:05.059762  245893 api_server.go:166] Checking apiserver status ...
	I0919 22:33:05.059794  245893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:05.070979  245893 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:05.081459  245893 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:05.081540  245893 ssh_runner.go:195] Run: ls
	I0919 22:33:05.085439  245893 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:05.089450  245893 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:05.089472  245893 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:05.089484  245893 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:05.089541  245893 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:05.089769  245893 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:05.106828  245893 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:05.106846  245893 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:05.106851  245893 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:05.112198  146335 retry.go:31] will retry after 1.788247018s: exit status 7
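
Each stderr block shows the checker probing the apiserver through the cluster's shared endpoint, https://192.168.49.254:8443/healthz, and getting 200 ok back for every control-plane node. A minimal sketch of that probe from the host, assuming /healthz answers unauthenticated requests (the 200 responses above suggest it does here); the apiserver certificate is self-signed, so verification is skipped for this throwaway check only:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Not signed by a public CA; acceptable only for an ad-hoc probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.254:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %s (%s)\n", resp.Status, body)
    }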
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (710.158969ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:06.946252  246156 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:06.946382  246156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:06.946393  246156 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:06.946399  246156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:06.946623  246156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:06.946798  246156 out.go:368] Setting JSON to false
	I0919 22:33:06.946819  246156 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:06.946882  246156 notify.go:220] Checking for updates...
	I0919 22:33:06.947172  246156 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:06.947197  246156 status.go:174] checking status of ha-434755 ...
	I0919 22:33:06.947588  246156 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:06.966649  246156 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:06.966697  246156 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:06.967003  246156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:06.984156  246156 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:06.984420  246156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:06.984470  246156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:07.001313  246156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:07.095418  246156 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:07.100347  246156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:07.112904  246156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:07.170235  246156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:07.159861498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:07.171000  246156 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:07.171045  246156 api_server.go:166] Checking apiserver status ...
	I0919 22:33:07.171098  246156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:07.184100  246156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:07.194597  246156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:07.194640  246156 ssh_runner.go:195] Run: ls
	I0919 22:33:07.198561  246156 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:07.202704  246156 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:07.202730  246156 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:07.202743  246156 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:07.202762  246156 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:07.203129  246156 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:07.220645  246156 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:07.220675  246156 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:07.220953  246156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:07.238046  246156 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:07.238413  246156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:07.238463  246156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:07.256260  246156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:07.350310  246156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:07.363830  246156 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:07.363858  246156 api_server.go:166] Checking apiserver status ...
	I0919 22:33:07.363897  246156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:07.376566  246156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:07.386898  246156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:07.386944  246156 ssh_runner.go:195] Run: ls
	I0919 22:33:07.390638  246156 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:07.394791  246156 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:07.394819  246156 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:07.394828  246156 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:07.394849  246156 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:07.395156  246156 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:07.413271  246156 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:07.413297  246156 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:07.413646  246156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:07.431145  246156 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:07.431448  246156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:07.431530  246156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:07.448866  246156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:07.542873  246156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:07.555589  246156 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:07.555621  246156 api_server.go:166] Checking apiserver status ...
	I0919 22:33:07.555670  246156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:07.568329  246156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:07.578729  246156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:07.578793  246156 ssh_runner.go:195] Run: ls
	I0919 22:33:07.582892  246156 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:07.587584  246156 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:07.587608  246156 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:07.587618  246156 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:07.587637  246156 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:07.587893  246156 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:07.605934  246156 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:07.605960  246156 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:07.605969  246156 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:07.612333  146335 retry.go:31] will retry after 1.832812133s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (692.649647ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:09.489443  246376 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:09.489561  246376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:09.489570  246376 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:09.489574  246376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:09.489784  246376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:09.489962  246376 out.go:368] Setting JSON to false
	I0919 22:33:09.489981  246376 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:09.490108  246376 notify.go:220] Checking for updates...
	I0919 22:33:09.490368  246376 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:09.490396  246376 status.go:174] checking status of ha-434755 ...
	I0919 22:33:09.490881  246376 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:09.510403  246376 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:09.510458  246376 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:09.510726  246376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:09.527665  246376 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:09.527880  246376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:09.527917  246376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:09.543354  246376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:09.636131  246376 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:09.640577  246376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:09.652213  246376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:09.709832  246376 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:09.700049135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:09.710615  246376 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:09.710651  246376 api_server.go:166] Checking apiserver status ...
	I0919 22:33:09.710690  246376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:09.723679  246376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:09.733326  246376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:09.733366  246376 ssh_runner.go:195] Run: ls
	I0919 22:33:09.736858  246376 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:09.741173  246376 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:09.741204  246376 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:09.741218  246376 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:09.741246  246376 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:09.741610  246376 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:09.758812  246376 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:09.758832  246376 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:09.759071  246376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:09.776304  246376 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:09.776630  246376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:09.776678  246376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:09.793560  246376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:09.887082  246376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:09.899010  246376 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:09.899041  246376 api_server.go:166] Checking apiserver status ...
	I0919 22:33:09.899088  246376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:09.910222  246376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:09.919718  246376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:09.919757  246376 ssh_runner.go:195] Run: ls
	I0919 22:33:09.923054  246376 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:09.927270  246376 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:09.927295  246376 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:09.927307  246376 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:09.927336  246376 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:09.927675  246376 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:09.946165  246376 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:09.946185  246376 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:09.946428  246376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:09.962854  246376 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:09.963181  246376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:09.963232  246376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:09.980043  246376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:10.072628  246376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:10.084731  246376 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:10.084761  246376 api_server.go:166] Checking apiserver status ...
	I0919 22:33:10.084804  246376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:10.096144  246376 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:10.105606  246376 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:10.105652  246376 ssh_runner.go:195] Run: ls
	I0919 22:33:10.109030  246376 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:10.113596  246376 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:10.113615  246376 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:10.113623  246376 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:10.113638  246376 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:10.113859  246376 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:10.132507  246376 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:10.132531  246376 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:10.132539  246376 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:10.138806  146335 retry.go:31] will retry after 3.569199958s: exit status 7
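
The stdout blocks repeated above always have the same shape: a bare node name, then indented key: value lines, with blank lines between nodes. A small hypothetical helper (not part of the suite) that scans that text and reports which hosts are not Running:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Pipe the captured `minikube status` stdout into this program.
        scanner := bufio.NewScanner(os.Stdin)
        node := ""
        for scanner.Scan() {
            line := strings.TrimSpace(scanner.Text())
            switch {
            case line == "":
                node = ""
            case !strings.Contains(line, ":"):
                // A bare line such as "ha-434755-m04" opens a new node block.
                node = line
            case strings.HasPrefix(line, "host:"):
                state := strings.TrimSpace(strings.TrimPrefix(line, "host:"))
                if state != "Running" && node != "" {
                    fmt.Printf("%s host is %s\n", node, state)
                }
            }
        }
    }

Run against the output above it would print only "ha-434755-m04 host is Stopped", which matches the exit status 7 the harness keeps seeing.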
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (694.959146ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:13.752048  246679 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:13.752347  246679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:13.752358  246679 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:13.752362  246679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:13.752700  246679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:13.752948  246679 out.go:368] Setting JSON to false
	I0919 22:33:13.752974  246679 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:13.753101  246679 notify.go:220] Checking for updates...
	I0919 22:33:13.753487  246679 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:13.753557  246679 status.go:174] checking status of ha-434755 ...
	I0919 22:33:13.754010  246679 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:13.773266  246679 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:13.773311  246679 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:13.773755  246679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:13.790405  246679 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:13.790670  246679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:13.790713  246679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:13.807733  246679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:13.900933  246679 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:13.905175  246679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:13.919371  246679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:13.975717  246679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:13.964972143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:13.976292  246679 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:13.976329  246679 api_server.go:166] Checking apiserver status ...
	I0919 22:33:13.976372  246679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:13.988788  246679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:13.998672  246679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:13.998730  246679 ssh_runner.go:195] Run: ls
	I0919 22:33:14.002596  246679 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:14.008470  246679 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:14.008492  246679 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:14.008526  246679 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:14.008558  246679 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:14.008807  246679 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:14.026448  246679 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:14.026470  246679 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:14.026742  246679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:14.043261  246679 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:14.043569  246679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:14.043628  246679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:14.059664  246679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:14.152721  246679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:14.165957  246679 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:14.165983  246679 api_server.go:166] Checking apiserver status ...
	I0919 22:33:14.166016  246679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:14.177361  246679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:14.186976  246679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:14.187019  246679 ssh_runner.go:195] Run: ls
	I0919 22:33:14.190599  246679 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:14.195001  246679 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:14.195025  246679 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:14.195036  246679 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:14.195055  246679 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:14.195340  246679 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:14.213813  246679 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:14.213838  246679 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:14.214131  246679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:14.230711  246679 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:14.230935  246679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:14.230972  246679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:14.247655  246679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:14.340949  246679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:14.352763  246679 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:14.352790  246679 api_server.go:166] Checking apiserver status ...
	I0919 22:33:14.352822  246679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:14.364181  246679 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:14.373678  246679 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:14.373729  246679 ssh_runner.go:195] Run: ls
	I0919 22:33:14.377113  246679 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:14.381304  246679 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:14.381324  246679 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:14.381333  246679 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:14.381347  246679 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:14.381638  246679 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:14.398750  246679 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:14.398768  246679 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:14.398776  246679 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:14.404805  146335 retry.go:31] will retry after 4.751125938s: exit status 7
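Each per-node check in the logs ends the same way: the kubeconfig supplies the shared apiserver endpoint, and the checker issues an HTTPS GET against /healthz, treating a 200 response with body "ok" as a running apiserver. Below is a minimal Go sketch of that final probe; the endpoint is the one printed in the log, and skipping TLS verification is an assumption made for brevity (the real check trusts the cluster CA from the kubeconfig).

// Minimal sketch of the /healthz probe logged as "Checking apiserver healthz at ...".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification in this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz") // endpoint from the log
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
}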
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (684.926527ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:19.202094  246935 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:19.202238  246935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.202252  246935 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:19.202258  246935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.202479  246935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:19.202669  246935 out.go:368] Setting JSON to false
	I0919 22:33:19.202699  246935 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:19.203017  246935 notify.go:220] Checking for updates...
	I0919 22:33:19.203204  246935 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:19.203277  246935 status.go:174] checking status of ha-434755 ...
	I0919 22:33:19.203993  246935 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:19.223891  246935 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:19.223924  246935 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:19.224203  246935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:19.240593  246935 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:19.240842  246935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:19.240879  246935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:19.257181  246935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:19.349630  246935 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:19.353962  246935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:19.365475  246935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:19.418663  246935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:19.408810858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:19.419409  246935 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:19.419456  246935 api_server.go:166] Checking apiserver status ...
	I0919 22:33:19.419517  246935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:19.431934  246935 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:19.441333  246935 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:19.441373  246935 ssh_runner.go:195] Run: ls
	I0919 22:33:19.444887  246935 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:19.450815  246935 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:19.450845  246935 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:19.450858  246935 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:19.450880  246935 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:19.451196  246935 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:19.468296  246935 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:19.468319  246935 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:19.468601  246935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:19.484762  246935 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:19.485012  246935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:19.485055  246935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:19.501313  246935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:19.593416  246935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:19.605300  246935 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:19.605326  246935 api_server.go:166] Checking apiserver status ...
	I0919 22:33:19.605370  246935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:19.617483  246935 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:19.627210  246935 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:19.627259  246935 ssh_runner.go:195] Run: ls
	I0919 22:33:19.630709  246935 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:19.634667  246935 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:19.634692  246935 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:19.634703  246935 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:19.634723  246935 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:19.635017  246935 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:19.652470  246935 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:19.652488  246935 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:19.652792  246935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:19.670294  246935 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:19.670568  246935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:19.670618  246935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:19.686407  246935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:19.778689  246935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:19.790613  246935 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:19.790643  246935 api_server.go:166] Checking apiserver status ...
	I0919 22:33:19.790681  246935 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:19.801697  246935 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:19.811090  246935 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:19.811127  246935 ssh_runner.go:195] Run: ls
	I0919 22:33:19.814359  246935 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:19.818463  246935 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:19.818483  246935 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:19.818491  246935 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:19.818574  246935 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:19.818895  246935 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:19.836899  246935 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:19.836926  246935 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:19.836933  246935 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:19.843685  146335 retry.go:31] will retry after 6.206890385s: exit status 7
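The repeated warning "unable to find freezer cgroup" is expected on this host: the check greps /proc/<pid>/cgroup for a cgroup-v1 freezer controller entry, and on a cgroup-v2-only kernel no such line exists, so the grep exits 1 and the status check falls back to the /healthz probe. A minimal Go equivalent of that lookup is sketched below; the PID is copied from the log purely for illustration (on the node it comes from the pgrep step).

// Minimal sketch of the freezer-cgroup lookup that the warnings above report as failing.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasFreezerCgroup(pid int) (bool, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// cgroup v1 lines look like "7:freezer:/kubepods/..."; cgroup v2 has only "0::/...".
		parts := strings.SplitN(sc.Text(), ":", 3)
		if len(parts) == 3 && strings.Contains(parts[1], "freezer") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasFreezerCgroup(2300) // PID taken from the log for illustration
	fmt.Println("freezer cgroup present:", ok, "err:", err)
}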
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (709.132115ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:26.095688  247332 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:26.095816  247332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:26.095828  247332 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:26.095834  247332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:26.096101  247332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:26.096294  247332 out.go:368] Setting JSON to false
	I0919 22:33:26.096315  247332 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:26.096506  247332 notify.go:220] Checking for updates...
	I0919 22:33:26.096814  247332 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:26.096850  247332 status.go:174] checking status of ha-434755 ...
	I0919 22:33:26.097420  247332 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:26.116131  247332 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:26.116152  247332 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:26.116345  247332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:26.133769  247332 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:26.134001  247332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:26.134038  247332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:26.150217  247332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:26.243656  247332 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:26.248259  247332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:26.259560  247332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:26.316590  247332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:26.305662278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:26.317262  247332 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:26.317299  247332 api_server.go:166] Checking apiserver status ...
	I0919 22:33:26.317343  247332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:26.329563  247332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:26.339013  247332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:26.339088  247332 ssh_runner.go:195] Run: ls
	I0919 22:33:26.342540  247332 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:26.346759  247332 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:26.346780  247332 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:26.346790  247332 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:26.346806  247332 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:26.347029  247332 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:26.363605  247332 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:26.363630  247332 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:26.363869  247332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:26.380406  247332 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:26.380653  247332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:26.380694  247332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:26.398250  247332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:26.490984  247332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:26.513385  247332 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:26.513421  247332 api_server.go:166] Checking apiserver status ...
	I0919 22:33:26.513472  247332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:26.524981  247332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:26.534012  247332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:26.534065  247332 ssh_runner.go:195] Run: ls
	I0919 22:33:26.537440  247332 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:26.541551  247332 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:26.541570  247332 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:26.541579  247332 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:26.541600  247332 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:26.541823  247332 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:26.558930  247332 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:26.558952  247332 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:26.559198  247332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:26.575153  247332 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:26.575393  247332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:26.575428  247332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:26.592274  247332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:26.684460  247332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:26.696374  247332 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:26.696406  247332 api_server.go:166] Checking apiserver status ...
	I0919 22:33:26.696452  247332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:26.708984  247332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:26.719174  247332 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:26.719226  247332 ssh_runner.go:195] Run: ls
	I0919 22:33:26.723673  247332 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:26.729405  247332 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:26.729431  247332 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:26.729443  247332 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:26.729462  247332 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:26.729798  247332 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:26.753421  247332 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:26.753461  247332 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:26.753471  247332 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:26.761428  146335 retry.go:31] will retry after 6.880482884s: exit status 7
E0919 22:33:33.470641  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (718.223768ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:33.686706  247790 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:33.686964  247790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:33.686973  247790 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:33.686977  247790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:33.687142  247790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:33.687305  247790 out.go:368] Setting JSON to false
	I0919 22:33:33.687324  247790 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:33.687404  247790 notify.go:220] Checking for updates...
	I0919 22:33:33.687778  247790 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:33.687807  247790 status.go:174] checking status of ha-434755 ...
	I0919 22:33:33.688300  247790 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:33.707183  247790 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:33.707213  247790 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:33.707458  247790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:33.725056  247790 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:33.725460  247790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:33.725528  247790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:33.746668  247790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:33.843914  247790 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:33.849172  247790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:33.863323  247790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:33.923861  247790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:33.911529499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:33.924454  247790 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:33.924489  247790 api_server.go:166] Checking apiserver status ...
	I0919 22:33:33.924560  247790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:33.937278  247790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:33.946621  247790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:33.946662  247790 ssh_runner.go:195] Run: ls
	I0919 22:33:33.950261  247790 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:33.954466  247790 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:33.954490  247790 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:33.954514  247790 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:33.954537  247790 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:33.954817  247790 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:33.972371  247790 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:33.972393  247790 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:33.972662  247790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:33.990636  247790 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:33.990956  247790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:33.990999  247790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:34.007642  247790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:34.100708  247790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:34.113539  247790 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:34.113570  247790 api_server.go:166] Checking apiserver status ...
	I0919 22:33:34.113611  247790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:34.125473  247790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:34.136083  247790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:34.136122  247790 ssh_runner.go:195] Run: ls
	I0919 22:33:34.139461  247790 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:34.143551  247790 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:34.143571  247790 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:34.143580  247790 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:34.143595  247790 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:34.143857  247790 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:34.163322  247790 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:34.163347  247790 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:34.163670  247790 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:34.180823  247790 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:34.181154  247790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:34.181203  247790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:34.199265  247790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:34.292779  247790 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:34.304908  247790 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:34.304941  247790 api_server.go:166] Checking apiserver status ...
	I0919 22:33:34.304982  247790 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:34.316236  247790 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:34.326054  247790 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:34.326094  247790 ssh_runner.go:195] Run: ls
	I0919 22:33:34.329593  247790 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:34.333716  247790 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:34.333741  247790 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:34.333752  247790 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:34.333775  247790 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:34.334105  247790 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:34.355270  247790 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:34.355288  247790 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:34.355295  247790 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:33:34.361048  146335 retry.go:31] will retry after 17.415802629s: exit status 7
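Every attempt above exits 7 for the same reason: ha-434755-m04 is still reported as Stopped, and minikube status returns a non-zero code whenever any node is not fully up, which is what keeps the test's retry loop going. The sketch below illustrates that aggregation in Go; the node list and the code 7 come from the log, but the logic itself is an illustrative assumption rather than minikube's exact implementation.

// Minimal sketch of how a per-node status list maps to the observed "exit status 7".
package main

import (
	"fmt"
	"os"
)

type nodeStatus struct {
	Name string
	Host string // "Running" or "Stopped"
}

func exitCode(nodes []nodeStatus) int {
	for _, n := range nodes {
		if n.Host != "Running" {
			return 7 // matches the exit status seen while ha-434755-m04 is stopped
		}
	}
	return 0
}

func main() {
	nodes := []nodeStatus{
		{"ha-434755", "Running"},
		{"ha-434755-m02", "Running"},
		{"ha-434755-m03", "Running"},
		{"ha-434755-m04", "Stopped"},
	}
	code := exitCode(nodes)
	fmt.Println("exit code:", code)
	os.Exit(code)
}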
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (710.837138ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:33:51.829793  250341 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:51.829932  250341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:51.829941  250341 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:51.829945  250341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:51.830191  250341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:33:51.830378  250341 out.go:368] Setting JSON to false
	I0919 22:33:51.830401  250341 mustload.go:65] Loading cluster: ha-434755
	I0919 22:33:51.830537  250341 notify.go:220] Checking for updates...
	I0919 22:33:51.830832  250341 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:33:51.830859  250341 status.go:174] checking status of ha-434755 ...
	I0919 22:33:51.831481  250341 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:33:51.852030  250341 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:33:51.852096  250341 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:51.852395  250341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:33:51.870808  250341 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:33:51.871151  250341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:51.871204  250341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:33:51.890658  250341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:33:51.986806  250341 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:51.991245  250341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:52.003520  250341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:52.056684  250341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:33:52.047030641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:52.057261  250341 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:52.057290  250341 api_server.go:166] Checking apiserver status ...
	I0919 22:33:52.057322  250341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:52.069686  250341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup
	W0919 22:33:52.079839  250341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2300/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:52.079891  250341 ssh_runner.go:195] Run: ls
	I0919 22:33:52.083295  250341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:52.088996  250341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:52.089016  250341 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:33:52.089026  250341 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:52.089045  250341 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:33:52.089307  250341 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:33:52.107889  250341 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:33:52.107915  250341 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:52.108184  250341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:33:52.126930  250341 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:33:52.127266  250341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:52.127320  250341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:33:52.144889  250341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:33:52.238653  250341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:52.250876  250341 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:52.250908  250341 api_server.go:166] Checking apiserver status ...
	I0919 22:33:52.250960  250341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:52.262407  250341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup
	W0919 22:33:52.272274  250341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4699/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:52.272318  250341 ssh_runner.go:195] Run: ls
	I0919 22:33:52.276287  250341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:52.280895  250341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:52.280918  250341 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:33:52.280929  250341 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:52.280948  250341 status.go:174] checking status of ha-434755-m03 ...
	I0919 22:33:52.281196  250341 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:33:52.298103  250341 status.go:371] ha-434755-m03 host status = "Running" (err=<nil>)
	I0919 22:33:52.298127  250341 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:52.298468  250341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:33:52.316989  250341 host.go:66] Checking if "ha-434755-m03" exists ...
	I0919 22:33:52.317226  250341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:52.317266  250341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:33:52.334553  250341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:33:52.427716  250341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:33:52.442022  250341 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:33:52.442051  250341 api_server.go:166] Checking apiserver status ...
	I0919 22:33:52.442085  250341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:33:52.454010  250341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	W0919 22:33:52.463136  250341 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:52.463183  250341 ssh_runner.go:195] Run: ls
	I0919 22:33:52.466424  250341 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:33:52.470488  250341 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:33:52.470548  250341 status.go:463] ha-434755-m03 apiserver status = Running (err=<nil>)
	I0919 22:33:52.470561  250341 status.go:176] ha-434755-m03 status: &{Name:ha-434755-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:33:52.470578  250341 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:33:52.470807  250341 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:33:52.488040  250341 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:33:52.488063  250341 status.go:384] host is not running, skipping remaining checks
	I0919 22:33:52.488070  250341 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5" : exit status 7
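	Reading the status trace above: for each control-plane node, minikube resolves the shared server endpoint (https://192.168.49.254:8443), tries to locate the kube-apiserver process and its freezer cgroup (which fails on cgroup v2 hosts, hence the repeated "unable to find freezer cgroup" warnings), and then falls back to an HTTPS GET against /healthz, reporting Running only on a 200 "ok" response. The standalone Go sketch below reproduces just that final healthz probe for illustration; the hard-coded endpoint is taken from the log, and skipping TLS verification is an assumption made to keep the sketch self-contained, not minikube's actual client setup.

	// healthzprobe: a hypothetical, minimal sketch of the check logged by
	// api_server.go above - GET https://<vip>:8443/healthz and treat a
	// 200 "ok" body as apiserver Running.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the trace above; a real client would trust the
		// cluster CA instead of skipping verification (assumption for brevity).
		const healthz = "https://192.168.49.254:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get(healthz)
		if err != nil {
			fmt.Println("apiserver status = Stopped, err:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver status = Running")
		} else {
			fmt.Printf("apiserver status = Error (%d: %s)\n", resp.StatusCode, body)
		}
	}

	Note that the overall exit status 7 comes from the fourth node (ha-434755-m04) being Stopped, not from the healthz probes, which all returned 200 above.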
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
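	The inspect dump above shows how the kicbase container publishes its ports only on 127.0.0.1 with ephemeral host ports (22/tcp -> 32783, 8443/tcp -> 32786 in this run); the status checks earlier resolve the SSH port at runtime with docker container inspect and a Go template on .NetworkSettings.Ports. The sketch below shells out to the docker CLI the same way the cli_runner.go lines in the trace do; the container name is the profile from this report and the helper itself is hypothetical, not minikube code.

	// portlookup: a hypothetical sketch of the host-port resolution seen in
	// cli_runner.go / sshutil.go above - ask the docker CLI which 127.0.0.1
	// port a container's published binding landed on.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort returns the host port mapped to the given container port,
	// e.g. containerPort "22/tcp".
	func hostPort(container, containerPort string) (string, error) {
		format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// "ha-434755" is the primary control-plane container from this report.
		port, err := hostPort("ha-434755", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
	}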
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.019162879s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
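
A note on the join commands printed by kubeadm above: the bootstrap token they embed is short-lived (kubeadm defaults to a 24h TTL), so those lines only document this particular run. If a fresh join command were ever needed for this cluster, a minimal sketch (assuming SSH access to the primary control-plane node; not part of the captured run) would be:

    minikube ssh -p ha-434755 -- sudo kubeadm token create --print-join-command

minikube does not replay the printed commands itself; the additional control-plane node in this test is provisioned programmatically, as the "ha-434755-m02" section below shows.
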
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
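
The elevateKubeSystemPrivileges step above amounts to a single RBAC object: the minikube-rbac ClusterRoleBinding ties the kube-system:default service account to the cluster-admin role so addon pods running under that account can manage cluster resources. A quick way to confirm it exists (an illustrative check, not something the test runs) would be:

    kubectl --context ha-434755 get clusterrolebinding minikube-rbac -o wide
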
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
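
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the existing forward directive and a log directive ahead of errors. Reconstructed from those sed expressions (the log never dumps the resulting file), the relevant Corefile fragment after the replace reads:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

which is what lets pods resolve host.minikube.internal to the host-side gateway address of the ha-434755 network (192.168.49.1).
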
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
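
The run command above pins the new node to the static address 192.168.49.3 that kic calculated on the existing ha-434755 network. An after-the-fact way to confirm the assignment from the host (illustrative; the log itself verifies it later with the same inspect template) is:

    docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" ha-434755-m02
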
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
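
The diff output above is how minikube keeps the docker.service update idempotent: the unit is re-rendered to docker.service.new on every provision, but the mv / daemon-reload / enable / restart sequence only fires when diff -u reports a difference from the live unit. To inspect what the m02 node actually ended up running (an illustrative check from inside the container, not captured here):

    sudo systemctl cat docker.service
    sudo systemctl show docker --property=ExecStart
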
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
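
The sed edits above patch /etc/containerd/config.toml in place rather than writing a fresh file. Reconstructed from those expressions (the log never dumps the file), the settings they touch end up reading approximately:

    sandbox_image = "registry.k8s.io/pause:3.10.1"
    restrict_oom_score_adj = false
    SystemdCgroup = true
    conf_dir = "/etc/cni/net.d"
    enable_unprivileged_ports = true

i.e. containerd is aligned with the "systemd" cgroup driver detected on the host, even though Docker is the runtime this profile actually uses.
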
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
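
The 129-byte /etc/docker/daemon.json written above is not shown in the log; given that this step is described as configuring Docker for the "systemd" cgroup driver, a representative sketch (an assumption about its contents, not the verbatim file) would be:

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

After the restart, an illustrative check such as docker info --format "{{.CgroupDriver}}" inside the node should report systemd.
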
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
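
The docker.service update above follows an idempotent pattern: the full replacement unit is written to docker.service.new, diffed against the live unit, and only swapped in (followed by daemon-reload, enable, restart) when the two differ. The same step in isolation, using the command logged at 22:25:23:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }

Because systemd allows only one ExecStart= for non-oneshot services, the replacement unit first clears the inherited ExecStart before setting its own, as the comments embedded in the unit explain.
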
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
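
The sed edits at 22:25:25 align containerd with the detected "systemd" cgroup driver: SystemdCgroup is forced to true, the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, and the legacy runtime.v1.linux / runc.v1 shims are rewritten to io.containerd.runc.v2. The two edits that matter most, pulled out of the log as a standalone sketch:

    # force the systemd cgroup driver and the runc v2 shim in containerd's config
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
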
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
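
The host.minikube.internal entry is added with a grep-then-append rewrite of /etc/hosts, which keeps the operation idempotent: any existing entry is stripped before the fresh one is appended. The pattern from 22:25:27 as a standalone sketch (printf with \t stands in for the literal tab the log uses):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
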
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
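
The openssl/ln sequence above installs each CA into the node's trust store: the PEM is placed under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and additionally linked under its OpenSSL subject hash (b5213941.0 for minikubeCA in this run) so TLS libraries can locate it. A condensed sketch of that sequence for a single cert:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
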
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
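
The kubelet unit generated for the joining node carries the per-node flags that matter for an HA member: --hostname-override pins the node name to ha-434755-m03 and --node-ip pins it to 192.168.49.4, while the kubeconfig/bootstrap-kubeconfig paths are the standard kubeadm locations. The ExecStart line from the unit above, reformatted for readability:

    /var/lib/minikube/binaries/v1.34.0/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
      --config=/var/lib/kubelet/config.yaml \
      --hostname-override=ha-434755-m03 \
      --kubeconfig=/etc/kubernetes/kubelet.conf \
      --node-ip=192.168.49.4
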
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
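
The kube-vip manifest above is deployed as a static pod (it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), so the kubelet itself runs kube-vip and keeps the HA virtual IP 192.168.49.254:8443 available on whichever control plane currently holds the plndr-cp-lock lease. Because the ip_vs kernel modules were not found (the lsmod check at 22:25:27), the config relies on ARP announcement (vip_arp: "true") rather than IPVS load-balancing. The gating check, as run in the log:

    # decides whether kube-vip may enable IPVS control-plane load-balancing
    sudo sh -c "lsmod | grep ip_vs" \
      || echo "ip_vs modules not available; control-plane load-balancing stays disabled"
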
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
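
The join sequence for the third control plane is: mint a join command on the existing control plane with kubeadm token create, run kubeadm join with --control-plane on the new node, then label and un-taint it so it can schedule workloads. A condensed sketch of those commands (token and CA hash are placeholders here; the real values are printed by the token create step, as in the log):

    # on an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # on the joining node (ha-434755-m03), using the token/hash printed above
    kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443 \
      --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 \
      --ignore-preflight-errors=all
    # afterwards, from any node with kubeconfig access
    kubectl label --overwrite nodes ha-434755-m03 minikube.k8s.io/primary=false
    kubectl taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
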
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
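
Note that the readiness checks run against the first control plane directly: the client config built for the VIP (https://192.168.49.254:8443) is overridden to https://192.168.49.2:8443 at 22:25:39, and the healthz probe above hits that endpoint. An equivalent manual probe, using the endpoint from the log:

    # -k skips cert setup; the test itself authenticates with the profile's client certs
    curl -sk https://192.168.49.2:8443/healthz
    # expected output: ok
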
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
	
	
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              9 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         9 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     9 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         9 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         9 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         9 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         9 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m6s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m6s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m9s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m6s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m16s (x8 over 9m17s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m16s (x7 over 9m17s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m16s (x8 over 9m17s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m9s                   kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m9s                   kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m9s                   kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           8m38s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17s                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aa648a096284af38bb8dd80e5d5ddd1
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m36s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m36s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8m23s              kube-proxy       
	  Normal  RegisteredNode           8m33s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           8m32s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           8m16s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m10s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m15s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m13s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  8m12s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  8m11s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  17s    node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:32:27.605856Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:37.986542Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:37.986590Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:37.991039Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a99fbed258953a7f","error":"failed to dial a99fbed258953a7f on stream Message (EOF)"}
	{"level":"warn","ts":"2025-09-19T22:32:38.129566Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:41.122917Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:41.122972Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:42.187259Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:45.124446Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:45.124539Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:49.126006Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:49.126083Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:53.127626Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:53.127679Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:57.128390Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:57.128458Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:33:01.129540Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:33:01.129608Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-19T22:33:01.289791Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"a99fbed258953a7f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:33:01.289920Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.289957Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.291087Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"a99fbed258953a7f","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:33:01.291122Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.305641Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.305908Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	
	
	==> kernel <==
	 22:33:53 up  1:16,  0 users,  load average: 1.49, 2.59, 21.59
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:33:03.791903       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:13.792591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:13.792636       1 main.go:301] handling current node
	I0919 22:33:13.792652       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:13.792657       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:13.792848       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:13.792863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:23.792615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:23.792668       1 main.go:301] handling current node
	I0919 22:33:23.792690       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:23.792696       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:23.792927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:23.792943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:33.792578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:33.792613       1 main.go:301] handling current node
	I0919 22:33:33.792630       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:33.792635       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:33.792844       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:33.792856       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:43.793581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:43.793641       1 main.go:301] handling current node
	I0919 22:33:43.793662       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:43.793669       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:43.793876       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:43.793892       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	I0919 22:32:12.547033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.112848       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:32:21.329211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	W0919 22:32:51.329750       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:33:21.614897       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:40.905898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (88.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:309: expected profile "ha-434755" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-434755\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-434755\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",
\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-434755\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\"
:\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-d
river-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimi
zations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:24:25.464542616Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0bf828a3209b8c3d2ad3e733e50f6df1f50e409f342a092c4c814dd4568d0ec",
	            "SandboxKey": "/var/run/docker/netns/a0bf828a3209",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f7:72:52:e8:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "b635e0cc6dc79a8f2eb8d44fbb74681cf1e5b405f36f7c9fa0b8f88a40d54eb0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
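In the inspect output above, HostConfig.PortBindings requests every published port with an empty HostPort (the 127.0.0.1:: form), so Docker assigns ephemeral host ports and the chosen values only appear under NetworkSettings.Ports (32783 for 22/tcp, 32786 for 8443/tcp in this run). A minimal sketch of the same lookup the minikube logs below perform with a Go template; the port key and profile name are taken from this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-434755
	# prints 32786 for this run; substitute "22/tcp" to get the SSH port (32783)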
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.010828056s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m03_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
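Every audit row with an empty END TIME involves the fourth node ha-434755-m04, either directly or through a file copied from it; these are the operations that never completed. As a reproduction sketch only, one of those invocations can be rerun by hand against the same profile (command shape taken from the table; the binary path is the test build used in this run):

	out/minikube-linux-amd64 -p ha-434755 ssh -n ha-434755-m04 "sudo cat /home/docker/cp-test.txt"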
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:21.076123  203160 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:21.076224  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076232  203160 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:21.076236  203160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:21.076432  203160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:21.076920  203160 out.go:368] Setting JSON to false
	I0919 22:24:21.077711  203160 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3997,"bootTime":1758316664,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:21.077805  203160 start.go:140] virtualization: kvm guest
	I0919 22:24:21.079564  203160 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:21.080690  203160 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:21.080699  203160 notify.go:220] Checking for updates...
	I0919 22:24:21.081753  203160 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:21.082865  203160 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:21.084034  203160 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:21.085082  203160 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:21.086101  203160 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:21.087230  203160 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:21.110266  203160 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:21.110338  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.164419  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.153482571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.164556  203160 docker.go:318] overlay module found
	I0919 22:24:21.166256  203160 out.go:179] * Using the docker driver based on user configuration
	I0919 22:24:21.167251  203160 start.go:304] selected driver: docker
	I0919 22:24:21.167262  203160 start.go:918] validating driver "docker" against <nil>
	I0919 22:24:21.167273  203160 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:21.167837  203160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:21.218732  203160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:24:21.209383411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:21.218890  203160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:24:21.219109  203160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:21.220600  203160 out.go:179] * Using Docker driver with root privileges
	I0919 22:24:21.221617  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:21.221686  203160 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:24:21.221699  203160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:24:21.221777  203160 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:21.222962  203160 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:24:21.223920  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:21.224932  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:21.225767  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.225807  203160 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:24:21.225817  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:21.225855  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:21.225956  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:21.225972  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:21.226288  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:21.226314  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json: {Name:mkebfaf58402ee5b29f1d566a094ba67c667bd07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:21.245058  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:21.245075  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:21.245090  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:21.245116  203160 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:21.245221  203160 start.go:364] duration metric: took 85.831µs to acquireMachinesLock for "ha-434755"
	I0919 22:24:21.245250  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:21.245320  203160 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:24:21.246894  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:21.247127  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:21.247160  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:21.247231  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:21.247268  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247320  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247397  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:21.247432  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:21.247449  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:21.247869  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:24:21.263071  203160 cli_runner.go:211] docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:24:21.263128  203160 network_create.go:284] running [docker network inspect ha-434755] to gather additional debugging logs...
	I0919 22:24:21.263150  203160 cli_runner.go:164] Run: docker network inspect ha-434755
	W0919 22:24:21.278228  203160 cli_runner.go:211] docker network inspect ha-434755 returned with exit code 1
	I0919 22:24:21.278257  203160 network_create.go:287] error running [docker network inspect ha-434755]: docker network inspect ha-434755: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-434755 not found
	I0919 22:24:21.278276  203160 network_create.go:289] output of [docker network inspect ha-434755]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-434755 not found
	
	** /stderr **
	I0919 22:24:21.278380  203160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:21.293889  203160 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a50f90}
	I0919 22:24:21.293945  203160 network_create.go:124] attempt to create docker network ha-434755 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:24:21.293988  203160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-434755 ha-434755
	I0919 22:24:21.346619  203160 network_create.go:108] docker network ha-434755 192.168.49.0/24 created
	I0919 22:24:21.346647  203160 kic.go:121] calculated static IP "192.168.49.2" for the "ha-434755" container
	I0919 22:24:21.346698  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:21.362122  203160 cli_runner.go:164] Run: docker volume create ha-434755 --label name.minikube.sigs.k8s.io=ha-434755 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:21.378481  203160 oci.go:103] Successfully created a docker volume ha-434755
	I0919 22:24:21.378568  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --entrypoint /usr/bin/test -v ha-434755:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:21.725934  203160 oci.go:107] Successfully prepared a docker volume ha-434755
	I0919 22:24:21.725988  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:21.726011  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:21.726083  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:25.368758  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.642631223s)
	I0919 22:24:25.368791  203160 kic.go:203] duration metric: took 3.642776622s to extract preloaded images to volume ...
	W0919 22:24:25.368885  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:25.368918  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:25.368955  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:25.420305  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755 --name ha-434755 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755 --network ha-434755 --ip 192.168.49.2 --volume ha-434755:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:25.661250  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Running}}
	I0919 22:24:25.679605  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:25.698105  203160 cli_runner.go:164] Run: docker exec ha-434755 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:25.750352  203160 oci.go:144] the created container "ha-434755" has a running status.
	I0919 22:24:25.750385  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa...
	I0919 22:24:26.145646  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:26.145696  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:26.169661  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.186378  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:26.186402  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:26.236428  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:26.253812  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:26.253917  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.271856  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.272111  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.272123  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:26.403852  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.403887  203160 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:24:26.403968  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.421146  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.421378  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.421391  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:24:26.565038  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:24:26.565121  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:26.582234  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:26.582443  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:26.582460  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:26.715045  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:26.715078  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:26.715105  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:26.715115  203160 provision.go:84] configureAuth start
	I0919 22:24:26.715165  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:26.732003  203160 provision.go:143] copyHostCerts
	I0919 22:24:26.732039  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732068  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:26.732077  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:26.732143  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:26.732228  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732246  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:26.732250  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:26.732275  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:26.732321  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732338  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:26.732344  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:26.732367  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:26.732417  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:24:27.341034  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:27.341097  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:27.341134  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.360598  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:27.455483  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:27.455564  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:27.480468  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:27.480525  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:24:27.503241  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:27.503287  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:27.525743  203160 provision.go:87] duration metric: took 810.613663ms to configureAuth
	I0919 22:24:27.525768  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:27.525921  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:27.525973  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.542866  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.543066  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.543078  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:27.675714  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:27.675740  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:27.675838  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:27.675893  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.693429  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.693693  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.693798  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:27.843188  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:27.843285  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:27.860458  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:27.860715  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:24:27.860742  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:28.937239  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:27.840752975 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
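The comment block inside the diff above describes the general systemd rule this provisioner relies on: when a unit (or a drop-in) redefines ExecStart, it must first reset the inherited value with an empty ExecStart= line, otherwise systemd rejects the unit for having more than one ExecStart. A minimal sketch of that pattern as a drop-in (the path and the trimmed-down dockerd command line here are illustrative, not what this run actually wrote):

    # /etc/systemd/system/docker.service.d/override.conf   (hypothetical drop-in path)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    sudo systemctl daemon-reload && sudo systemctl restart docker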
	
	I0919 22:24:28.937277  203160 machine.go:96] duration metric: took 2.683443018s to provisionDockerMachine
	I0919 22:24:28.937292  203160 client.go:171] duration metric: took 7.690121191s to LocalClient.Create
	I0919 22:24:28.937318  203160 start.go:167] duration metric: took 7.690191518s to libmachine.API.Create "ha-434755"
	I0919 22:24:28.937332  203160 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:24:28.937346  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:28.937417  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:28.937468  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:28.955631  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.052278  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:29.055474  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:29.055519  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:29.055533  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:29.055541  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:29.055555  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:29.055607  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:29.055697  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:29.055708  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:29.055792  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:29.064211  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:29.088887  203160 start.go:296] duration metric: took 151.540336ms for postStartSetup
	I0919 22:24:29.089170  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.106927  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:29.107156  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:29.107207  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.123683  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.214129  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:29.218338  203160 start.go:128] duration metric: took 7.973004208s to createHost
	I0919 22:24:29.218360  203160 start.go:83] releasing machines lock for "ha-434755", held for 7.973124739s
	I0919 22:24:29.218412  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:24:29.236040  203160 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:29.236081  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.236126  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:29.236195  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:29.253449  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.253827  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:29.414344  203160 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:29.418771  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:29.423119  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:29.450494  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:29.450577  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:29.475768  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
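The two find/sed invocations above first patch the loopback CNI config in place (adding a "name" field and pinning cniVersion to 1.0.0) and then disable any bridge/podman configs by renaming them to *.mk_disabled, which is why 87-podman-bridge.conflist and 100-crio-bridge.conf are reported as disabled. After the patch the loopback file would look roughly like this (a sketch; the exact filename under /etc/cni/net.d is not printed in the log):

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }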
	I0919 22:24:29.475797  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.475832  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.475949  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.491395  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:29.501756  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:29.511013  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:29.511066  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:29.520269  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.529232  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:29.538263  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:29.547175  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:29.555699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:29.564644  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:29.573613  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:29.582664  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:29.590362  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:29.598040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:29.662901  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
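The chain of sed edits above rewrites /etc/containerd/config.toml so that containerd's CRI plugin uses the runc v2 runtime with the systemd cgroup driver and the pause:3.10.1 sandbox image, after which containerd is restarted to pick the file up. Assuming the containerd 1.x config layout used in this base image, the relevant fragment would end up looking roughly like:

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true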
	I0919 22:24:29.737694  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:29.737750  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:29.737804  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:29.750261  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.761088  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:29.781368  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:29.792667  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:29.803679  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:29.819981  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:29.823528  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:29.833551  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:29.851373  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:29.919426  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:29.982907  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:29.983042  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
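The 129-byte /etc/docker/daemon.json pushed here is what actually flips dockerd to the systemd cgroup driver; its contents are not echoed in the log, but a daemon.json written for that purpose typically looks like the sketch below (an assumption about the payload, not the literal bytes transferred):

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }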
	I0919 22:24:30.001192  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:30.012142  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:30.077304  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:24:30.841187  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:30.852558  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:30.863819  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:30.874629  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:30.936849  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:30.998282  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.059613  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:31.085894  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:31.097613  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.165516  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:31.237651  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:31.250126  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:31.250193  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:31.253768  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:31.253815  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:31.257175  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:31.291330  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
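Because /etc/crictl.yaml was pointed at unix:///var/run/cri-dockerd.sock a few steps earlier, crictl here reports Docker 28.4.0 through the cri-dockerd shim. A manual spot-check from inside the node would look something like this (an illustrative invocation, not part of this run):

    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
    $ sudo crictl ps -a     # list CRI containers via cri-dockerd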
	I0919 22:24:31.291400  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.316224  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:31.343571  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:31.343639  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:31.360312  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:31.364394  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:31.376325  203160 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:31.376429  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:31.376472  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.396685  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.396706  203160 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:24:31.396777  203160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:24:31.417311  203160 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 22:24:31.417334  203160 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:31.417348  203160 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:24:31.417454  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:31.417533  203160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:24:31.468906  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:31.468934  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:31.468949  203160 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:31.468980  203160 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:31.469131  203160 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
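This generated kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) is copied to /var/tmp/minikube/kubeadm.yaml further below and passed to kubeadm init via --config. To inspect what kubeadm would do with the same file without touching the node, a dry run would look roughly like (illustrative, not executed in this log):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run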
	
	I0919 22:24:31.469170  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:31.469222  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:31.481888  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
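lsmod | grep ip_vs returning nothing means the ip_vs kernel modules are not loaded on this host, so kube-vip's IPVS-based control-plane load balancing is skipped; the VIP itself is still announced via ARP leader election (vip_arp / vip_leaderelection in the manifest below). On a host where the modules exist they could be loaded beforehand with something like (illustrative):

    sudo modprobe ip_vs ip_vs_rr
    lsmod | grep ip_vs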
	I0919 22:24:31.481979  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
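The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip as a static pod and the HA VIP 192.168.49.254 is held on eth0 by whichever control-plane node currently owns the plndr-cp-lock lease. A quick way to see the VIP from the test host would be (illustrative, using the container name from this run):

    docker exec ha-434755 ip addr show eth0 | grep 192.168.49.254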
	I0919 22:24:31.482024  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:31.490896  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:31.490954  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:24:31.499752  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:24:31.517642  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:31.535661  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:24:31.552926  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:24:31.572177  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:31.575892  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
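Together with the host.minikube.internal entry added earlier, the one-liner above leaves the node's /etc/hosts with two minikube-managed mappings, equivalent to:

    192.168.49.1	host.minikube.internal
    192.168.49.254	control-plane.minikube.internal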
	I0919 22:24:31.587094  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:31.654039  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:31.678017  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:24:31.678046  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:31.678070  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.678228  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:31.678271  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:31.678281  203160 certs.go:256] generating profile certs ...
	I0919 22:24:31.678337  203160 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:31.678354  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt with IP's: []
	I0919 22:24:31.857665  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt ...
	I0919 22:24:31.857696  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt: {Name:mk7ec51226de11d757f14966ffd43a2037698787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857881  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key ...
	I0919 22:24:31.857892  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key: {Name:mkf584fffef919693714a07e5a88b44eca7219c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:31.857971  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8
	I0919 22:24:31.857986  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:24:32.133506  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 ...
	I0919 22:24:32.133540  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8: {Name:mkb81ce84ef58bc410b7449c932fc5a925016309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133711  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 ...
	I0919 22:24:32.133729  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8: {Name:mk079553ff6e398f68775f47e1ad8c0a1a64a140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.133803  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:32.133908  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.9c8d1cb8 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:32.133973  203160 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:24:32.133989  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt with IP's: []
	I0919 22:24:32.385885  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt ...
	I0919 22:24:32.385919  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt: {Name:mk3bec5b301362978b2b3b81fd3c21d3f704e1cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386084  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key ...
	I0919 22:24:32.386097  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key: {Name:mk9670132fab0c6814f19a454e4e08b86e71aeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:32.386174  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:32.386207  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:32.386221  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:32.386234  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:32.386246  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:32.386271  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:32.386283  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:32.386292  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:32.386341  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:32.386378  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:32.386388  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:32.386418  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:32.386443  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:32.386467  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:32.386517  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:32.386548  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.386562  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.386574  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.387195  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:32.413179  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:32.437860  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:32.462719  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:32.488640  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:32.513281  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:24:32.536826  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:32.559540  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:32.582215  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:32.607378  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:32.629686  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:32.651946  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:32.668687  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:32.673943  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:32.683156  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686577  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.686633  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:32.693223  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:32.702177  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:32.711521  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714732  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.714766  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:32.721219  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:32.730116  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:32.739018  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742287  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.742330  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:32.748703  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
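The hex filenames used for the symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-name hashes of the corresponding certificates, which is why each ln -fs is paired with an openssl x509 -hash call; that naming convention is what lets TLS clients find CAs in the hashed /etc/ssl/certs directory. Recreating one of the links by hand would look like (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${h}.0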
	I0919 22:24:32.757370  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:32.760542  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:32.760590  203160 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:32.760710  203160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:24:32.778911  203160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:24:32.787673  203160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:24:32.796245  203160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:24:32.796280  203160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:24:32.804896  203160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:24:32.804909  203160 kubeadm.go:157] found existing configuration files:
	
	I0919 22:24:32.804937  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:24:32.813189  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:24:32.813229  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:24:32.821160  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:24:32.829194  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:24:32.829245  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:24:32.837031  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.845106  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:24:32.845150  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:24:32.853133  203160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:24:32.861349  203160 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:24:32.861390  203160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:24:32.869355  203160 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:24:32.905932  203160 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:24:32.906264  203160 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:24:32.922979  203160 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:24:32.923110  203160 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:24:32.923168  203160 kubeadm.go:310] OS: Linux
	I0919 22:24:32.923231  203160 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:24:32.923291  203160 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:24:32.923361  203160 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:24:32.923426  203160 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:24:32.923486  203160 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:24:32.923570  203160 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:24:32.923633  203160 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:24:32.923686  203160 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:24:32.975656  203160 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:24:32.975772  203160 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:24:32.975923  203160 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:24:32.987123  203160 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:24:32.990614  203160 out.go:252]   - Generating certificates and keys ...
	I0919 22:24:32.990701  203160 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:24:32.990790  203160 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:24:33.305563  203160 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:24:33.403579  203160 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:24:33.794985  203160 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:24:33.939882  203160 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:24:34.319905  203160 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:24:34.320050  203160 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.571803  203160 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:24:34.572036  203160 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-434755 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:24:34.785683  203160 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:24:34.913179  203160 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:24:35.193757  203160 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:24:35.193908  203160 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:24:35.269921  203160 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:24:35.432895  203160 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:24:35.889148  203160 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:24:36.099682  203160 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:24:36.370632  203160 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:24:36.371101  203160 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:24:36.373221  203160 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:24:36.375010  203160 out.go:252]   - Booting up control plane ...
	I0919 22:24:36.375112  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:24:36.375205  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:24:36.375823  203160 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:24:36.385552  203160 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:24:36.385660  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:24:36.391155  203160 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:24:36.391446  203160 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:24:36.391516  203160 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:24:36.469169  203160 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:24:36.469341  203160 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:24:37.470960  203160 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001771868s
	I0919 22:24:37.475271  203160 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:24:37.475402  203160 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:24:37.475560  203160 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:24:37.475683  203160 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:24:38.691996  203160 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.216651105s
	I0919 22:24:39.748252  203160 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.272903249s
	I0919 22:24:43.641652  203160 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.166322635s
	I0919 22:24:43.652285  203160 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:24:43.662136  203160 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:24:43.670817  203160 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:24:43.671109  203160 kubeadm.go:310] [mark-control-plane] Marking the node ha-434755 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:24:43.678157  203160 kubeadm.go:310] [bootstrap-token] Using token: g87idd.cyuzs8jougdixinx
	I0919 22:24:43.679741  203160 out.go:252]   - Configuring RBAC rules ...
	I0919 22:24:43.679886  203160 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:24:43.685914  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:24:43.691061  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:24:43.693550  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:24:43.697628  203160 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:24:43.699973  203160 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:24:44.047466  203160 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:24:44.461485  203160 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:24:45.047812  203160 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:24:45.048594  203160 kubeadm.go:310] 
	I0919 22:24:45.048685  203160 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:24:45.048725  203160 kubeadm.go:310] 
	I0919 22:24:45.048861  203160 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:24:45.048871  203160 kubeadm.go:310] 
	I0919 22:24:45.048906  203160 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:24:45.049005  203160 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:24:45.049058  203160 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:24:45.049064  203160 kubeadm.go:310] 
	I0919 22:24:45.049110  203160 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:24:45.049131  203160 kubeadm.go:310] 
	I0919 22:24:45.049219  203160 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:24:45.049232  203160 kubeadm.go:310] 
	I0919 22:24:45.049278  203160 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:24:45.049339  203160 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:24:45.049394  203160 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:24:45.049400  203160 kubeadm.go:310] 
	I0919 22:24:45.049474  203160 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:24:45.049614  203160 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:24:45.049627  203160 kubeadm.go:310] 
	I0919 22:24:45.049721  203160 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.049859  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 22:24:45.049895  203160 kubeadm.go:310] 	--control-plane 
	I0919 22:24:45.049904  203160 kubeadm.go:310] 
	I0919 22:24:45.050015  203160 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:24:45.050028  203160 kubeadm.go:310] 
	I0919 22:24:45.050110  203160 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g87idd.cyuzs8jougdixinx \
	I0919 22:24:45.050212  203160 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 22:24:45.053328  203160 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:24:45.053440  203160 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
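The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If it ever needs to be recomputed for this cluster, the standard kubeadm recipe applied to minikube's certificate directory (/var/lib/minikube/certs, per the ClusterConfiguration earlier) would be (illustrative):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'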
	I0919 22:24:45.053459  203160 cni.go:84] Creating CNI manager for ""
	I0919 22:24:45.053466  203160 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:24:45.054970  203160 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:24:45.056059  203160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:24:45.060192  203160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:24:45.060207  203160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:24:45.078671  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:24:45.281468  203160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:24:45.281585  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.281587  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755 minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=true
	I0919 22:24:45.374035  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:45.378242  203160 ops.go:34] apiserver oom_adj: -16
	I0919 22:24:45.874252  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.375078  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.874791  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:24:46.939251  203160 kubeadm.go:1105] duration metric: took 1.657752945s to wait for elevateKubeSystemPrivileges
	I0919 22:24:46.939292  203160 kubeadm.go:394] duration metric: took 14.17870588s to StartCluster
	I0919 22:24:46.939313  203160 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.939381  203160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:46.940075  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:46.940315  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:24:46.940328  203160 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:46.940349  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:24:46.940375  203160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:24:46.940455  203160 addons.go:69] Setting storage-provisioner=true in profile "ha-434755"
	I0919 22:24:46.940480  203160 addons.go:69] Setting default-storageclass=true in profile "ha-434755"
	I0919 22:24:46.940526  203160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-434755"
	I0919 22:24:46.940484  203160 addons.go:238] Setting addon storage-provisioner=true in "ha-434755"
	I0919 22:24:46.940592  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:46.940622  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.940889  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.941141  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.961198  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:24:46.961822  203160 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:24:46.961843  203160 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:24:46.961849  203160 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:24:46.961854  203160 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:24:46.961858  203160 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:24:46.961927  203160 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:24:46.962245  203160 addons.go:238] Setting addon default-storageclass=true in "ha-434755"
	I0919 22:24:46.962289  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:46.962659  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:46.962840  203160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:24:46.964064  203160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:46.964085  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:24:46.964143  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.980987  203160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:46.981012  203160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:24:46.981083  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:46.985677  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:46.998945  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:47.020097  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:24:47.098011  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:24:47.110913  203160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:24:47.173952  203160 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
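	To confirm the host.minikube.internal record actually landed in CoreDNS (a sketch; assumes kubectl points at this cluster's kubeconfig):
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'
	The hosts block should contain "192.168.49.1 host.minikube.internal" followed by "fallthrough", matching the sed edit above.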
	I0919 22:24:47.362290  203160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:24:47.363580  203160 addons.go:514] duration metric: took 423.211287ms for enable addons: enabled=[storage-provisioner default-storageclass]
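	A quick way to verify both enabled addons from the host (a sketch; the "storage-provisioner" pod name and the "standard" default StorageClass are minikube conventions, not shown in this log):
	  kubectl get storageclass
	  kubectl -n kube-system get pod storage-provisioner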
	I0919 22:24:47.363630  203160 start.go:246] waiting for cluster config update ...
	I0919 22:24:47.363647  203160 start.go:255] writing updated cluster config ...
	I0919 22:24:47.364969  203160 out.go:203] 
	I0919 22:24:47.366064  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:47.366127  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.367471  203160 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:24:47.368387  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:24:47.369440  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:24:47.370378  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.370397  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:47.370461  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:24:47.370513  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:47.370529  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:24:47.370620  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:47.391559  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:24:47.391581  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:24:47.391603  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:24:47.391635  203160 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:47.391801  203160 start.go:364] duration metric: took 141.515µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:24:47.391835  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}

	I0919 22:24:47.391926  203160 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:24:47.393797  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:24:47.393909  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:24:47.393934  203160 client.go:168] LocalClient.Create starting
	I0919 22:24:47.393999  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:24:47.394037  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394072  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394137  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:24:47.394163  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:24:47.394178  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:24:47.394368  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:47.411751  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc0016fd680 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:24:47.411805  203160 kic.go:121] calculated static IP "192.168.49.3" for the "ha-434755-m02" container
	I0919 22:24:47.411877  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:24:47.428826  203160 cli_runner.go:164] Run: docker volume create ha-434755-m02 --label name.minikube.sigs.k8s.io=ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:24:47.446551  203160 oci.go:103] Successfully created a docker volume ha-434755-m02
	I0919 22:24:47.446629  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --entrypoint /usr/bin/test -v ha-434755-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:24:47.837811  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m02
	I0919 22:24:47.837861  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:24:47.837884  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:24:47.837943  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:24:51.165942  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.327954443s)
	I0919 22:24:51.165985  203160 kic.go:203] duration metric: took 3.328094858s to extract preloaded images to volume ...
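	If the extraction ever needs checking by hand, one option is to mount the volume in a throwaway container (a sketch; assumes the preload unpacks Docker image data under /var/lib/docker inside the volume):
	  docker run --rm -v ha-434755-m02:/var busybox ls /var/lib/docker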
	W0919 22:24:51.166081  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:24:51.166111  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:24:51.166151  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:24:51.222283  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m02 --name ha-434755-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m02 --network ha-434755 --ip 192.168.49.3 --volume ha-434755-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:24:51.469867  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Running}}
	I0919 22:24:51.487954  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.506846  203160 cli_runner.go:164] Run: docker exec ha-434755-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:24:51.559220  203160 oci.go:144] the created container "ha-434755-m02" has a running status.
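	The randomly published host ports for this container (SSH, API server, Docker, etc.) can be listed with (a sketch):
	  docker port ha-434755-m02
	  docker port ha-434755-m02 22
	The second form prints just the 127.0.0.1:<port> mapping that the SSH client below connects to.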
	I0919 22:24:51.559254  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa...
	I0919 22:24:51.766973  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:24:51.767017  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:24:51.797620  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.823671  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:24:51.823693  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:24:51.878635  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:24:51.902762  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:51.902873  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:51.926268  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:51.926707  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:51.926729  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:52.076154  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.076188  203160 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:24:52.076259  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.099415  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.099841  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.099873  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:24:52.261548  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:24:52.261646  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.283406  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:52.283734  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:52.283754  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:52.428353  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:24:52.428390  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:24:52.428420  203160 ubuntu.go:190] setting up certificates
	I0919 22:24:52.428441  203160 provision.go:84] configureAuth start
	I0919 22:24:52.428536  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:52.450885  203160 provision.go:143] copyHostCerts
	I0919 22:24:52.450924  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.450961  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:24:52.450971  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:24:52.451027  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:24:52.451115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451140  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:24:52.451145  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:24:52.451185  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:24:52.451248  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451272  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:24:52.451276  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:24:52.451301  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:24:52.451355  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:24:52.822893  203160 provision.go:177] copyRemoteCerts
	I0919 22:24:52.822975  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:52.823015  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:52.844478  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:52.949460  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:24:52.949550  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:24:52.985521  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:24:52.985590  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:24:53.015276  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:24:53.015359  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:53.043799  203160 provision.go:87] duration metric: took 615.336421ms to configureAuth
	I0919 22:24:53.043834  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:24:53.044042  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:53.044098  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.065294  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.065671  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.065691  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:24:53.203158  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:24:53.203193  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:24:53.203308  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:24:53.203367  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.220915  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.221235  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.221346  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:24:53.374632  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:24:53.374713  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:53.392460  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:53.392706  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:24:53.392731  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:24:54.550785  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:24:53.372388319 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:24:54.550828  203160 machine.go:96] duration metric: took 2.648042096s to provisionDockerMachine
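	To confirm the drop-in shown in the diff above is the unit systemd is actually running (a sketch; run on the m02 node, e.g. via "minikube -p ha-434755 ssh -n m02"):
	  systemctl cat docker
	  systemctl is-active docker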
	I0919 22:24:54.550847  203160 client.go:171] duration metric: took 7.156901293s to LocalClient.Create
	I0919 22:24:54.550877  203160 start.go:167] duration metric: took 7.156965929s to libmachine.API.Create "ha-434755"
	I0919 22:24:54.550892  203160 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:24:54.550905  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:54.550979  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:54.551047  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.573731  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.676450  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:54.680626  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:24:54.680660  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:24:54.680669  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:24:54.680678  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:24:54.680695  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:24:54.680757  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:24:54.680849  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:24:54.680863  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:24:54.680970  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:24:54.691341  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:54.722119  203160 start.go:296] duration metric: took 171.208879ms for postStartSetup
	I0919 22:24:54.722583  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.743611  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:24:54.743848  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:24:54.743887  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.765985  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.864692  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:24:54.870738  203160 start.go:128] duration metric: took 7.478790821s to createHost
	I0919 22:24:54.870767  203160 start.go:83] releasing machines lock for "ha-434755-m02", held for 7.478950053s
	I0919 22:24:54.870847  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:24:54.898999  203160 out.go:179] * Found network options:
	I0919 22:24:54.900212  203160 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:24:54.901275  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:24:54.901331  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:24:54.901436  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:24:54.901515  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.901712  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:54.901788  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:24:54.923297  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:54.924737  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:24:55.020889  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:24:55.117431  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:24:55.117543  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:55.154058  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
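	What remains in /etc/cni/net.d after this patch-and-disable pass can be listed directly (a sketch; run on the node):
	  ls -la /etc/cni/net.d
	  sudo cat /etc/cni/net.d/*loopback.conf*
	The bridge and podman configs should now carry the .mk_disabled suffix, and the loopback config should report cniVersion 1.0.0.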
	I0919 22:24:55.154092  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.154128  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.154249  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.171125  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:24:55.182699  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:24:55.193910  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:24:55.193981  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:24:55.206930  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.218445  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:24:55.229676  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:24:55.239797  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:55.249561  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:24:55.261388  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:24:55.272063  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:24:55.285133  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:55.294764  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:55.304309  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.385891  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
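	The sed edits above leave containerd pinned to the systemd cgroup driver and the expected pause image; a quick grep on the node confirms it (a sketch):
	  grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	Note that containerd is stopped again a few lines below, since this profile uses the docker runtime via cri-dockerd.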
	I0919 22:24:55.483649  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:24:55.483704  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:24:55.483771  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:24:55.498112  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.511999  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:55.531010  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:55.547951  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:24:55.562055  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:55.582950  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:24:55.588111  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:24:55.600129  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:24:55.622263  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:24:55.715078  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:24:55.798019  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:24:55.798075  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:24:55.821473  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:24:55.835550  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:55.921379  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
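	After the daemon.json copy and restart above, the daemon should report the systemd cgroup driver (a sketch; run on the node):
	  docker info --format '{{.CgroupDriver}}'
	  cat /etc/docker/daemon.json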
	I0919 22:24:56.663040  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:56.676296  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:24:56.691640  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:56.705621  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:24:56.790623  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:24:56.868190  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:56.965154  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:24:56.986139  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:24:56.999297  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:57.084263  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:24:57.171144  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:24:57.185630  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:24:57.185700  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:24:57.190173  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:24:57.190233  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:24:57.194000  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:57.238791  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
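	Because /etc/crictl.yaml was pointed at cri-dockerd earlier in this sequence, the same check can be reproduced by hand (a sketch; run on the node):
	  sudo crictl version
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info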
	I0919 22:24:57.238870  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.271275  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:24:57.304909  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:24:57.306146  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:24:57.307257  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:24:57.328319  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:57.333877  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:24:57.348827  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:24:57.349095  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:57.349417  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:24:57.372031  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:57.372263  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:24:57.372273  203160 certs.go:194] generating shared ca certs ...
	I0919 22:24:57.372289  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.372399  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:24:57.372434  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:24:57.372443  203160 certs.go:256] generating profile certs ...
	I0919 22:24:57.372523  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:24:57.372551  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:24:57.372569  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:24:57.438372  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 ...
	I0919 22:24:57.438407  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57: {Name:mk30b073ffbf49812fc1c5fc78a448cc1824100f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438643  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 ...
	I0919 22:24:57.438666  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57: {Name:mk59c79ca511caeebb332978950944f46d4ce354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:57.438796  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:24:57.438979  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:24:57.439158  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
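	The regenerated apiserver cert should now carry SANs for both control-plane IPs and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.254, per the crypto.go line above). A sketch of checking it from the host:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt \
	    | grep -A2 'Subject Alternative Name'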
	I0919 22:24:57.439184  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:24:57.439202  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:24:57.439220  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:24:57.439238  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:24:57.439256  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:24:57.439273  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:24:57.439294  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:24:57.439312  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:24:57.439376  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:24:57.439458  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:57.439474  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:57.439537  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:24:57.439573  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:57.439608  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:24:57.439670  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:24:57.439716  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:24:57.439743  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:57.439759  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:24:57.439830  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:57.462047  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:57.557856  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:24:57.562525  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:24:57.578095  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:24:57.582466  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:24:57.599559  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:24:57.603627  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:24:57.618994  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:24:57.622912  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:24:57.638660  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:24:57.643248  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:24:57.660006  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:24:57.664313  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:24:57.680744  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:57.714036  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:24:57.747544  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:57.780943  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:57.812353  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:24:57.845693  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:57.878130  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:57.911308  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:24:57.946218  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:24:57.984297  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:58.017177  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:24:58.049420  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:24:58.073963  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:24:58.097887  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:24:58.122255  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:24:58.147967  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:24:58.171849  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:24:58.195690  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
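For context: the scp lines above distribute the shared cluster CA, the profile's apiserver and proxy-client key pairs, and the sa/front-proxy/etcd material (read back into memory a few lines earlier) to fixed paths under /var/lib/minikube/certs over the SSH session opened above, so every control plane ends up using the same CA rather than minting its own. A minimal sketch, not taken from this run, of verifying that the pushed CA matches the local copy via the same SSH endpoint reported by sshutil above:

    # host-side fingerprint of the local minikube CA
    openssl x509 -noout -fingerprint -sha256 \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt
    # fingerprint of the copy that landed on the node (port/key/user as in the sshutil line)
    ssh -p 32783 -i /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa \
      docker@127.0.0.1 \
      'sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/ca.crt'
    # Both commands should print the same SHA-256 fingerprint.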
	I0919 22:24:58.219698  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:24:58.227264  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:24:58.240247  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244702  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.244768  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:24:58.254189  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:58.265745  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:58.279180  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284030  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.284084  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:58.292591  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:24:58.305819  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:24:58.318945  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323696  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.323742  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:24:58.333578  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
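The three "sudo ln -fs" commands above install the host-provided certificates as trusted CAs by symlinking them under /etc/ssl/certs using their OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names). A sketch of the same idiom, using the file names from the log:

    # The hash is the certificate's OpenSSL subject hash; per the log above it expands
    # to b5213941 for minikubeCA.pem. ".0" is the first collision slot.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"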
	I0919 22:24:58.346835  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:58.351013  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:24:58.351074  203160 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:24:58.351194  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
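The kubelet unit text above uses the systemd override idiom: the empty ExecStart= clears whatever command was inherited, and the second ExecStart= relaunches kubelet with the node-specific --hostname-override=ha-434755-m02 and --node-ip=192.168.49.3. A hypothetical way to confirm the effective command on the node (not run here):

    systemctl cat kubelet                  # shows the base unit plus any drop-ins
    systemctl show kubelet -p ExecStart    # only the overriding command should remain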
	I0919 22:24:58.351227  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:24:58.351267  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:24:58.367957  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:24:58.368034  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
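Because the lsmod check above found no ip_vs modules (exit status 1), control-plane load-balancing is skipped and kube-vip should still announce the 192.168.49.254 VIP via ARP on eth0 (vip_arp=true, vip_interface=eth0 in the manifest); the manifest is written a few lines below as a static pod at /etc/kubernetes/manifests/kube-vip.yaml. A hedged sketch of the same module check:

    if ! lsmod | grep -q '^ip_vs'; then
      echo "ip_vs not loaded: VIP is ARP-announced only, no IPVS load-balancing"
    fi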
	I0919 22:24:58.368096  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:58.379862  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:58.379941  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:24:58.392276  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:24:58.417444  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:58.442669  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:24:58.468697  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:58.473305  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
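The /etc/hosts rewrite above is idempotent: any existing control-plane.minikube.internal line is filtered out before the VIP entry is re-appended, so repeated starts never accumulate duplicates. The same pattern spelled out (illustrative, not from the run):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.49.254\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts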
	I0919 22:24:58.487646  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:58.578606  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:58.608451  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:24:58.608749  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:58.608859  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:24:58.608912  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:24:58.632792  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:24:58.802805  203160 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:24:58.802874  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:25:17.080561  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token b4953v.b0t4y42p8a3t0277 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (18.277615829s)
	I0919 22:25:17.080625  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:17.341701  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m02 minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:17.424260  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:17.499697  203160 start.go:319] duration metric: took 18.890943143s to joinCluster
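The join above adds m02 as a second control plane: --control-plane makes kubeadm set up local apiserver and etcd manifests (reusing the certificates copied earlier rather than a kubeadm certificate key, which is presumably why --ignore-preflight-errors=all is passed), and the follow-up kubectl commands label the node and remove the control-plane NoSchedule taint so ordinary pods can schedule onto it. A hypothetical post-join check, reusing the kubeconfig already present on the first node:

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get nodes -l node-role.kubernetes.io/control-plane -o wide
    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l component=etcd -o wide
    # Expect two control-plane nodes and one etcd pod per control-plane node.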
	I0919 22:25:17.499790  203160 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.500059  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.501017  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:17.502040  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:17.615768  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:17.630185  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:17.630259  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:17.630522  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639687  203160 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:25:17.639715  203160 node_ready.go:38] duration metric: took 9.169272ms for node "ha-434755-m02" to be "Ready" ...
	I0919 22:25:17.639733  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:17.639783  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:17.654193  203160 api_server.go:72] duration metric: took 154.362028ms to wait for apiserver process to appear ...
	I0919 22:25:17.654221  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:17.654246  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:17.658704  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:17.659870  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:17.659894  203160 api_server.go:131] duration metric: took 5.665643ms to wait for apiserver health ...
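The health gate above is a plain GET against /healthz on the first control plane (the stale VIP host was overridden to 192.168.49.2 a few lines earlier), expecting the literal body "ok", followed by a version probe. An equivalent manual check, sketched with the client certificate paths named in the kapi client config above:

    curl --cacert /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt \
         --cert   /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt \
         --key    /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key \
         https://192.168.49.2:8443/healthz    # prints "ok" on success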
	I0919 22:25:17.659902  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:17.664793  203160 system_pods.go:59] 18 kube-system pods found
	I0919 22:25:17.664839  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.664851  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.664856  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.664862  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.664875  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.664883  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.664891  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.664903  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.664909  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.664921  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.664931  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664938  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.664946  203160 system_pods.go:61] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.664954  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.664962  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.664969  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.664975  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.664981  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.664991  203160 system_pods.go:74] duration metric: took 5.081378ms to wait for pod list to return data ...
	I0919 22:25:17.665004  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:17.668317  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:17.668340  203160 default_sa.go:55] duration metric: took 3.328321ms for default service account to be created ...
	I0919 22:25:17.668351  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:17.673137  203160 system_pods.go:86] 18 kube-system pods found
	I0919 22:25:17.673173  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:17.673190  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:17.673196  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:17.673202  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:17.673216  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:17.673225  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:17.673232  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:17.673239  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:17.673245  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:17.673253  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:17.673261  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673269  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:17.673277  203160 system_pods.go:89] "kube-proxy-tzxjp" [68f449c9-12dc-40e2-9d22-a0c067962cb9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:17.673285  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:17.673306  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:17.673316  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:17.673321  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:17.673325  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:17.673334  203160 system_pods.go:126] duration metric: took 4.976103ms to wait for k8s-apps to be running ...
	I0919 22:25:17.673343  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:17.673397  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:17.689275  203160 system_svc.go:56] duration metric: took 15.922768ms WaitForService to wait for kubelet
	I0919 22:25:17.689301  203160 kubeadm.go:578] duration metric: took 189.477657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:17.689322  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:17.693097  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693135  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693151  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:17.693156  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:17.693162  203160 node_conditions.go:105] duration metric: took 3.833677ms to run NodePressure ...
	I0919 22:25:17.693179  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:17.693211  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:17.695103  203160 out.go:203] 
	I0919 22:25:17.698818  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:17.698972  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.700470  203160 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:25:17.701508  203160 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:25:17.702525  203160 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:25:17.703600  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:17.703627  203160 cache.go:58] Caching tarball of preloaded images
	I0919 22:25:17.703660  203160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:25:17.703750  203160 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:25:17.703762  203160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:25:17.703897  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:17.728614  203160 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:25:17.728640  203160 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:25:17.728661  203160 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:25:17.728696  203160 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:25:17.728819  203160 start.go:364] duration metric: took 98.455µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:25:17.728853  203160 start.go:93] Provisioning new machine with config: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:17.728991  203160 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:25:17.732545  203160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:25:17.732672  203160 start.go:159] libmachine.API.Create for "ha-434755" (driver="docker")
	I0919 22:25:17.732707  203160 client.go:168] LocalClient.Create starting
	I0919 22:25:17.732782  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 22:25:17.732823  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732845  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.732912  203160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 22:25:17.732939  203160 main.go:141] libmachine: Decoding PEM data...
	I0919 22:25:17.732958  203160 main.go:141] libmachine: Parsing certificate...
	I0919 22:25:17.733232  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:17.751632  203160 network_create.go:77] Found existing network {name:ha-434755 subnet:0xc00219e2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:25:17.751674  203160 kic.go:121] calculated static IP "192.168.49.4" for the "ha-434755-m03" container
	I0919 22:25:17.751747  203160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:25:17.770069  203160 cli_runner.go:164] Run: docker volume create ha-434755-m03 --label name.minikube.sigs.k8s.io=ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:25:17.789823  203160 oci.go:103] Successfully created a docker volume ha-434755-m03
	I0919 22:25:17.789902  203160 cli_runner.go:164] Run: docker run --rm --name ha-434755-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --entrypoint /usr/bin/test -v ha-434755-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:25:18.164388  203160 oci.go:107] Successfully prepared a docker volume ha-434755-m03
	I0919 22:25:18.164435  203160 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:25:18.164462  203160 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:25:18.164543  203160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:25:21.103950  203160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-434755-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.939357533s)
	I0919 22:25:21.103986  203160 kic.go:203] duration metric: took 2.939518923s to extract preloaded images to volume ...
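The preload tarball is extracted straight into the ha-434755-m03 named volume mounted at /var, so the new node's dockerd starts with the v1.34.0 images already under /var/lib/docker instead of pulling them. A hedged way to peek at that volume with a throwaway container (same kicbase image; the image-db path assumes the overlay2 storage driver used by the preload):

    docker run --rm --entrypoint /bin/ls -v ha-434755-m03:/var \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      /var/lib/docker/image/overlay2/imagedb/content/sha256
    # Each file listed is the config blob of one preloaded image.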
	W0919 22:25:21.104096  203160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:25:21.104151  203160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:25:21.104202  203160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:25:21.177154  203160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-434755-m03 --name ha-434755-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-434755-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-434755-m03 --network ha-434755 --ip 192.168.49.4 --volume ha-434755-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:25:21.498634  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Running}}
	I0919 22:25:21.522257  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.545087  203160 cli_runner.go:164] Run: docker exec ha-434755-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:25:21.601217  203160 oci.go:144] the created container "ha-434755-m03" has a running status.
	I0919 22:25:21.601289  203160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa...
	I0919 22:25:21.834101  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:25:21.834162  203160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:25:21.931924  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:21.958463  203160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:25:21.958488  203160 kic_runner.go:114] Args: [docker exec --privileged ha-434755-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:25:22.013210  203160 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:25:22.034113  203160 machine.go:93] provisionDockerMachine start ...
	I0919 22:25:22.034216  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.055636  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.055967  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.055993  203160 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:25:22.197369  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.197398  203160 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:25:22.197459  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.216027  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.216285  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.216301  203160 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:25:22.368448  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:25:22.368549  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.386972  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:22.387278  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:22.387304  203160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:25:22.524292  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:25:22.524331  203160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:25:22.524354  203160 ubuntu.go:190] setting up certificates
	I0919 22:25:22.524368  203160 provision.go:84] configureAuth start
	I0919 22:25:22.524434  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:22.541928  203160 provision.go:143] copyHostCerts
	I0919 22:25:22.541971  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542000  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:25:22.542009  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:25:22.542076  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:25:22.542159  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542180  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:25:22.542186  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:25:22.542213  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:25:22.542310  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542334  203160 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:25:22.542337  203160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:25:22.542362  203160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:25:22.542414  203160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:25:22.877628  203160 provision.go:177] copyRemoteCerts
	I0919 22:25:22.877694  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:25:22.877741  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:22.896937  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:22.995146  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:25:22.995210  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:25:23.022236  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:25:23.022316  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:25:23.047563  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:25:23.047631  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:25:23.072319  203160 provision.go:87] duration metric: took 547.932448ms to configureAuth
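configureAuth above generates a per-machine Docker TLS server certificate whose SANs cover 127.0.0.1, the node's static IP 192.168.49.4, the hostname, localhost and minikube, then ships server.pem, server-key.pem and ca.pem to /etc/docker so the dockerd TCP endpoint on 2376 can be verified. A hypothetical inspection of those SANs from the host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'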
	I0919 22:25:23.072353  203160 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:25:23.072625  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:23.072688  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.090959  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.091171  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.091183  203160 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:25:23.228223  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:25:23.228253  203160 ubuntu.go:71] root file system type: overlay
	I0919 22:25:23.228422  203160 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:25:23.228509  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.246883  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.247100  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.247170  203160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:25:23.398060  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:25:23.398137  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:23.415663  203160 main.go:141] libmachine: Using SSH client type: native
	I0919 22:25:23.415892  203160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:25:23.415918  203160 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:25:24.567023  203160 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:25:23.396311399 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:25:24.567060  203160 machine.go:96] duration metric: took 2.53292644s to provisionDockerMachine
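The SSH command above only rewrites docker.service and restarts Docker when the freshly rendered unit differs from the installed one ("diff -u ... || { mv ...; systemctl daemon-reload/enable/restart; }"), so an already-converged node skips the restart. A minimal Go sketch of that idempotent-update idiom; buildUpdateUnitCmd is a hypothetical helper, not minikube's own code:

```go
package main

import "fmt"

// buildUpdateUnitCmd reproduces the shell idiom from the log: compare the newly
// generated unit with the installed one and only swap it in (and restart the
// service) when the two differ. Hypothetical helper for illustration only.
func buildUpdateUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
}

func main() {
	// diff exits 0 when the files match, so the block after || never runs and
	// an unchanged node keeps its running Docker daemon.
	fmt.Println(buildUpdateUnitCmd("/lib/systemd/system/docker.service"))
}
```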
	I0919 22:25:24.567072  203160 client.go:171] duration metric: took 6.83435882s to LocalClient.Create
	I0919 22:25:24.567092  203160 start.go:167] duration metric: took 6.834424553s to libmachine.API.Create "ha-434755"
	I0919 22:25:24.567099  203160 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:25:24.567108  203160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:25:24.567161  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:25:24.567201  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.584782  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.683573  203160 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:25:24.686859  203160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:25:24.686883  203160 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:25:24.686890  203160 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:25:24.686896  203160 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:25:24.686906  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:25:24.686958  203160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:25:24.687030  203160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:25:24.687040  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:25:24.687116  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:25:24.695639  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:24.721360  203160 start.go:296] duration metric: took 154.24817ms for postStartSetup
	I0919 22:25:24.721702  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.739596  203160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:25:24.739824  203160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:25:24.739863  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.756921  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.848110  203160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:25:24.852461  203160 start.go:128] duration metric: took 7.123445347s to createHost
	I0919 22:25:24.852485  203160 start.go:83] releasing machines lock for "ha-434755-m03", held for 7.123651539s
	I0919 22:25:24.852564  203160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:25:24.871364  203160 out.go:179] * Found network options:
	I0919 22:25:24.872460  203160 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:25:24.873469  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873491  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873531  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:25:24.873550  203160 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:25:24.873614  203160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:25:24.873651  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.873674  203160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:25:24.873726  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:25:24.891768  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:24.892067  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:25:25.055623  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:25:25.084377  203160 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:25:25.084463  203160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:25:25.110916  203160 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:25:25.110954  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.110987  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.111095  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.128062  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:25:25.138541  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:25:25.147920  203160 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:25:25.147980  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:25:25.158084  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.167726  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:25:25.177468  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:25:25.187066  203160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:25:25.196074  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:25:25.205874  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:25:25.215655  203160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:25:25.225542  203160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:25:25.233921  203160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:25:25.241915  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.307691  203160 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:25:25.379485  203160 start.go:495] detecting cgroup driver to use...
	I0919 22:25:25.379559  203160 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:25:25.379617  203160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:25:25.392037  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.402672  203160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:25:25.417255  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:25:25.428199  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:25:25.438890  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:25:25.454554  203160 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:25:25.457748  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:25:25.467191  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:25:25.484961  203160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:25:25.554190  203160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:25:25.619726  203160 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:25:25.619771  203160 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:25:25.638490  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:25:25.649394  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:25.718759  203160 ssh_runner.go:195] Run: sudo systemctl restart docker
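The docker.go:575 step writes a small /etc/docker/daemon.json (129 bytes) to switch dockerd to the "systemd" cgroup driver before the restart above. The file's exact contents are not echoed in the log; the sketch below shows the conventional way to express that setting, with the logging fields added purely as illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models a minimal /etc/docker/daemon.json. Only "exec-opts" is
// implied by the log ("configuring docker to use \"systemd\" as cgroup driver");
// the log-driver fields are illustrative extras, not taken from the test.
type daemonConfig struct {
	ExecOpts  []string          `json:"exec-opts,omitempty"`
	LogDriver string            `json:"log-driver,omitempty"`
	LogOpts   map[string]string `json:"log-opts,omitempty"`
}

func main() {
	cfg := daemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=systemd"},
		LogDriver: "json-file",
		LogOpts:   map[string]string{"max-size": "100m"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // payload that would be copied to /etc/docker/daemon.json
}
```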
	I0919 22:25:26.508414  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:25:26.521162  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:25:26.532748  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.543940  203160 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:25:26.612578  203160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:25:26.675793  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.742908  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:25:26.767410  203160 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:25:26.778129  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:26.843785  203160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:25:26.914025  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:25:26.926481  203160 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:25:26.926561  203160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:25:26.930135  203160 start.go:563] Will wait 60s for crictl version
	I0919 22:25:26.930190  203160 ssh_runner.go:195] Run: which crictl
	I0919 22:25:26.933448  203160 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:25:26.970116  203160 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:25:26.970186  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:26.995443  203160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:25:27.022587  203160 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:25:27.023535  203160 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:25:27.024458  203160 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:25:27.025398  203160 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:25:27.041313  203160 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:25:27.045217  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
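The /etc/hosts update above follows a filter-then-append pattern: strip any existing "host.minikube.internal" line, append the current mapping, and copy the temp file back, so repeated provisioning leaves exactly one entry (the same trick is used again later for control-plane.minikube.internal). The same logic in a self-contained Go sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry mirrors the shell pipeline in the log: drop any line that
// already ends in "\t<name>" and append a fresh "<ip>\t<name>" mapping.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(before, "192.168.49.1", "host.minikube.internal"))
}
```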
	I0919 22:25:27.056734  203160 mustload.go:65] Loading cluster: ha-434755
	I0919 22:25:27.056929  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:27.057119  203160 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:25:27.073694  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:27.073923  203160 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:25:27.073935  203160 certs.go:194] generating shared ca certs ...
	I0919 22:25:27.073947  203160 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.074070  203160 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:25:27.074110  203160 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:25:27.074119  203160 certs.go:256] generating profile certs ...
	I0919 22:25:27.074189  203160 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:25:27.074218  203160 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:25:27.074232  203160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:25:27.130384  203160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 ...
	I0919 22:25:27.130417  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6: {Name:mke05473b288d96ff0a35c82b85fde4c8e83b40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130606  203160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 ...
	I0919 22:25:27.130621  203160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6: {Name:mk192f98c5799773d19e5939501046d3123dfe7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:25:27.130715  203160 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:25:27.130866  203160 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:25:27.131029  203160 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:25:27.131044  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:25:27.131061  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:25:27.131075  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:25:27.131089  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:25:27.131102  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:25:27.131115  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:25:27.131128  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:25:27.131141  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:25:27.131198  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:25:27.131239  203160 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:25:27.131248  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:25:27.131275  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:25:27.131303  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:25:27.131331  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:25:27.131380  203160 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:25:27.131411  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.131428  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.131442  203160 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.131523  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:27.159068  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:27.248746  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:25:27.252715  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:25:27.267211  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:25:27.270851  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:25:27.283028  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:25:27.286477  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:25:27.298415  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:25:27.301783  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:25:27.314834  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:25:27.318008  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:25:27.330473  203160 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:25:27.333984  203160 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:25:27.345794  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:25:27.369657  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:25:27.393116  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:25:27.416244  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:25:27.439315  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:25:27.463476  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:25:27.486915  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:25:27.510165  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:25:27.534471  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:25:27.560237  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:25:27.583106  203160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:25:27.606007  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:25:27.623725  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:25:27.641200  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:25:27.658321  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:25:27.675317  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:25:27.692422  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:25:27.709455  203160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:25:27.727392  203160 ssh_runner.go:195] Run: openssl version
	I0919 22:25:27.732862  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:25:27.742299  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745678  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.745728  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:25:27.752398  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:25:27.761605  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:25:27.771021  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774382  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.774418  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:25:27.781109  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:25:27.790814  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:25:27.799904  203160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803130  203160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.803179  203160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:25:27.809808  203160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
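The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding CA certificates; linking /etc/ssl/certs/<hash>.0 to the PEM file is what lets OpenSSL-based clients discover the minikube and test CAs. A small sketch that derives the link name the same way the log does (the certificate path below is a placeholder):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink shells out to "openssl x509 -hash -noout -in <cert>" (the same
// command the log runs) and returns the /etc/ssl/certs/<hash>.0 symlink name.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Println("ln -fs /usr/share/ca-certificates/minikubeCA.pem", link)
}
```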
	I0919 22:25:27.819246  203160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:25:27.822627  203160 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:25:27.822680  203160 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:25:27.822775  203160 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:25:27.822800  203160 kube-vip.go:115] generating kube-vip config ...
	I0919 22:25:27.822828  203160 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:25:27.834857  203160 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:25:27.834926  203160 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
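The generated manifest above runs kube-vip as a static pod on each control-plane node; the nodes elect a holder for the "plndr-cp-lock" Lease in kube-system, and the winner answers on the VIP 192.168.49.254 (the APIServerHAVIP used throughout this test). A short client-go sketch, assuming a local kubeconfig, that shows which node currently holds the lock:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in this test the context would point at ha-434755.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// kube-vip's control-plane load balancing is coordinated through this Lease.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(
		context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("VIP 192.168.49.254 currently held by:", *lease.Spec.HolderIdentity)
	}
}
```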
	I0919 22:25:27.834980  203160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:25:27.843463  203160 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:25:27.843532  203160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:25:27.852030  203160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:25:27.869894  203160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:25:27.888537  203160 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:25:27.908135  203160 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:25:27.911776  203160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:25:27.923898  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:27.989986  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:28.015049  203160 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:25:28.015341  203160 start.go:317] joinCluster: &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0919 22:25:28.015488  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:25:28.015561  203160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:25:28.036185  203160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:25:28.179815  203160 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:28.179865  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:25:39.101433  203160 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ktda9v.620xzponyzx4q4u3 --discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-434755-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (10.921540133s)
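The join command above is the worker join line printed by "kubeadm token create --print-join-command" (run two steps earlier), extended with the flags that make m03 a control-plane member: --control-plane, --apiserver-advertise-address with its own IP, the cri-dockerd socket, and the node name. A hypothetical helper showing that composition (flag values copied from the log; the token and hash are placeholders, and this is not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoinCmd extends a printed worker join command into a
// control-plane join, as seen in the log. Illustrative only.
func controlPlaneJoinCmd(printed, nodeName, advertiseIP, criSocket string) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket " + criSocket,
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(controlPlaneJoinCmd(printed, "ha-434755-m03", "192.168.49.4", "unix:///var/run/cri-dockerd.sock"))
}
```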
	I0919 22:25:39.101473  203160 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:25:39.324555  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-434755-m03 minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-434755 minikube.k8s.io/primary=false
	I0919 22:25:39.399339  203160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-434755-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:25:39.475025  203160 start.go:319] duration metric: took 11.459681606s to joinCluster
	I0919 22:25:39.475121  203160 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:25:39.475445  203160 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:25:39.476384  203160 out.go:179] * Verifying Kubernetes components...
	I0919 22:25:39.477465  203160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:25:39.581053  203160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:25:39.594584  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:25:39.594654  203160 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:25:39.594885  203160 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	W0919 22:25:41.598871  203160 node_ready.go:57] node "ha-434755-m03" has "Ready":"False" status (will retry)
	I0919 22:25:43.601543  203160 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:25:43.601575  203160 node_ready.go:38] duration metric: took 4.006671921s for node "ha-434755-m03" to be "Ready" ...
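node_ready.go above polls the m03 Node object until its Ready condition flips to True, with a 6-minute ceiling. The equivalent check with client-go, as a sketch assuming a local kubeconfig:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True or the timeout hits.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "ha-434755-m03", 6*time.Minute))
}
```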
	I0919 22:25:43.601598  203160 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:25:43.601660  203160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:25:43.617376  203160 api_server.go:72] duration metric: took 4.142210029s to wait for apiserver process to appear ...
	I0919 22:25:43.617405  203160 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:25:43.617428  203160 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:25:43.622827  203160 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:25:43.624139  203160 api_server.go:141] control plane version: v1.34.0
	I0919 22:25:43.624164  203160 api_server.go:131] duration metric: took 6.751487ms to wait for apiserver health ...
	I0919 22:25:43.624175  203160 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:25:43.631480  203160 system_pods.go:59] 25 kube-system pods found
	I0919 22:25:43.631526  203160 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.631534  203160 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.631540  203160 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.631545  203160 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.631555  203160 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.631565  203160 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.631584  203160 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.631592  203160 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.631602  203160 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.631607  203160 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.631624  203160 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.631633  203160 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.631639  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.631652  203160 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.631660  203160 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.631668  203160 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631675  203160 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.631683  203160 system_pods.go:61] "kube-proxy-vwrdt" [e3337cd7-84eb-4ddd-921f-1ef42899cc96] Failed / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.631692  203160 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.631698  203160 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.631709  203160 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.631718  203160 system_pods.go:61] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.631724  203160 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.631732  203160 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.631737  203160 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.631747  203160 system_pods.go:74] duration metric: took 7.564894ms to wait for pod list to return data ...
	I0919 22:25:43.631760  203160 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:25:43.635188  203160 default_sa.go:45] found service account: "default"
	I0919 22:25:43.635210  203160 default_sa.go:55] duration metric: took 3.443504ms for default service account to be created ...
	I0919 22:25:43.635221  203160 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:25:43.640825  203160 system_pods.go:86] 24 kube-system pods found
	I0919 22:25:43.640849  203160 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:25:43.640854  203160 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:25:43.640858  203160 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:25:43.640861  203160 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:25:43.640867  203160 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:25:43.640872  203160 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:25:43.640877  203160 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:25:43.640883  203160 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:25:43.640889  203160 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:25:43.640893  203160 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:25:43.640901  203160 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:25:43.640907  203160 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:25:43.640913  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:25:43.640922  203160 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:25:43.640927  203160 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:25:43.640932  203160 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:25:43.640937  203160 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:25:43.640941  203160 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:25:43.640944  203160 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:25:43.640952  203160 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:25:43.640958  203160 system_pods.go:89] "kube-vip-ha-434755" [eb65f5df-597d-4d36-b4c4-e33b1c1a6b35] Running
	I0919 22:25:43.640966  203160 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:25:43.640971  203160 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:25:43.640974  203160 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:25:43.640981  203160 system_pods.go:126] duration metric: took 5.753999ms to wait for k8s-apps to be running ...
	I0919 22:25:43.640989  203160 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:25:43.641031  203160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:25:43.653532  203160 system_svc.go:56] duration metric: took 12.534189ms WaitForService to wait for kubelet
	I0919 22:25:43.653556  203160 kubeadm.go:578] duration metric: took 4.178399256s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:25:43.653573  203160 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:25:43.656435  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656455  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656467  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656470  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656475  203160 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:25:43.656479  203160 node_conditions.go:123] node cpu capacity is 8
	I0919 22:25:43.656484  203160 node_conditions.go:105] duration metric: took 2.906956ms to run NodePressure ...
	I0919 22:25:43.656557  203160 start.go:241] waiting for startup goroutines ...
	I0919 22:25:43.656587  203160 start.go:255] writing updated cluster config ...
	I0919 22:25:43.656893  203160 ssh_runner.go:195] Run: rm -f paused
	I0919 22:25:43.660610  203160 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:25:43.661067  203160 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:25:43.664242  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669047  203160 pod_ready.go:94] pod "coredns-66bc5c9577-4lmln" is "Ready"
	I0919 22:25:43.669069  203160 pod_ready.go:86] duration metric: took 4.804098ms for pod "coredns-66bc5c9577-4lmln" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.669076  203160 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.673294  203160 pod_ready.go:94] pod "coredns-66bc5c9577-w8trg" is "Ready"
	I0919 22:25:43.673313  203160 pod_ready.go:86] duration metric: took 4.232517ms for pod "coredns-66bc5c9577-w8trg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.676291  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681202  203160 pod_ready.go:94] pod "etcd-ha-434755" is "Ready"
	I0919 22:25:43.681224  203160 pod_ready.go:86] duration metric: took 4.891614ms for pod "etcd-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.681231  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685174  203160 pod_ready.go:94] pod "etcd-ha-434755-m02" is "Ready"
	I0919 22:25:43.685197  203160 pod_ready.go:86] duration metric: took 3.961188ms for pod "etcd-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.685203  203160 pod_ready.go:83] waiting for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:43.861561  203160 request.go:683] "Waited before sending request" delay="176.248264ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.062212  203160 request.go:683] "Waited before sending request" delay="197.34334ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.261544  203160 request.go:683] "Waited before sending request" delay="75.158894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-434755-m03"
	I0919 22:25:44.461584  203160 request.go:683] "Waited before sending request" delay="196.309622ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:44.861909  203160 request.go:683] "Waited before sending request" delay="172.267033ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:45.261844  203160 request.go:683] "Waited before sending request" delay="72.222149ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:45.690633  203160 pod_ready.go:104] pod "etcd-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:46.192067  203160 pod_ready.go:94] pod "etcd-ha-434755-m03" is "Ready"
	I0919 22:25:46.192098  203160 pod_ready.go:86] duration metric: took 2.50688828s for pod "etcd-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.262400  203160 request.go:683] "Waited before sending request" delay="70.17118ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:25:46.266643  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.462133  203160 request.go:683] "Waited before sending request" delay="195.353683ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755"
	I0919 22:25:46.661695  203160 request.go:683] "Waited before sending request" delay="196.23519ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:46.664990  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755" is "Ready"
	I0919 22:25:46.665013  203160 pod_ready.go:86] duration metric: took 398.342895ms for pod "kube-apiserver-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.665024  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:46.862485  203160 request.go:683] "Waited before sending request" delay="197.349925ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m02"
	I0919 22:25:47.062458  203160 request.go:683] "Waited before sending request" delay="196.27598ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:47.066027  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m02" is "Ready"
	I0919 22:25:47.066062  203160 pod_ready.go:86] duration metric: took 401.030788ms for pod "kube-apiserver-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.066074  203160 pod_ready.go:83] waiting for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:47.262536  203160 request.go:683] "Waited before sending request" delay="196.349445ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.461658  203160 request.go:683] "Waited before sending request" delay="196.15827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:47.662339  203160 request.go:683] "Waited before sending request" delay="95.242557ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-434755-m03"
	I0919 22:25:47.861611  203160 request.go:683] "Waited before sending request" delay="196.286818ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.262313  203160 request.go:683] "Waited before sending request" delay="192.342763ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:48.661859  203160 request.go:683] "Waited before sending request" delay="92.219172ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:49.071933  203160 pod_ready.go:104] pod "kube-apiserver-ha-434755-m03" is not "Ready", error: <nil>
	I0919 22:25:51.071739  203160 pod_ready.go:94] pod "kube-apiserver-ha-434755-m03" is "Ready"
	I0919 22:25:51.071767  203160 pod_ready.go:86] duration metric: took 4.005686408s for pod "kube-apiserver-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.074543  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.262152  203160 request.go:683] "Waited before sending request" delay="185.334685ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755"
	I0919 22:25:51.265630  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755" is "Ready"
	I0919 22:25:51.265657  203160 pod_ready.go:86] duration metric: took 191.092666ms for pod "kube-controller-manager-ha-434755" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.265666  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.462098  203160 request.go:683] "Waited before sending request" delay="196.345826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m02"
	I0919 22:25:51.661912  203160 request.go:683] "Waited before sending request" delay="196.187823ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:51.665191  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m02" is "Ready"
	I0919 22:25:51.665224  203160 pod_ready.go:86] duration metric: took 399.551288ms for pod "kube-controller-manager-ha-434755-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.665233  203160 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:51.861619  203160 request.go:683] "Waited before sending request" delay="196.276968ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-434755-m03"
	I0919 22:25:52.062202  203160 request.go:683] "Waited before sending request" delay="197.351779ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:52.065578  203160 pod_ready.go:94] pod "kube-controller-manager-ha-434755-m03" is "Ready"
	I0919 22:25:52.065604  203160 pod_ready.go:86] duration metric: took 400.365679ms for pod "kube-controller-manager-ha-434755-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.262003  203160 request.go:683] "Waited before sending request" delay="196.29708ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:25:52.265548  203160 pod_ready.go:83] waiting for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.462021  203160 request.go:683] "Waited before sending request" delay="196.352536ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cnsm"
	I0919 22:25:52.662519  203160 request.go:683] "Waited before sending request" delay="196.351016ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m02"
	I0919 22:25:52.665831  203160 pod_ready.go:94] pod "kube-proxy-4cnsm" is "Ready"
	I0919 22:25:52.665859  203160 pod_ready.go:86] duration metric: took 400.28275ms for pod "kube-proxy-4cnsm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.665868  203160 pod_ready.go:83] waiting for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:25:52.862291  203160 request.go:683] "Waited before sending request" delay="196.344667ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.061976  203160 request.go:683] "Waited before sending request" delay="196.35101ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.261911  203160 request.go:683] "Waited before sending request" delay="95.241357ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dzrbh"
	I0919 22:25:53.461590  203160 request.go:683] "Waited before sending request" delay="196.28491ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:53.862244  203160 request.go:683] "Waited before sending request" delay="192.346086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	I0919 22:25:54.261842  203160 request.go:683] "Waited before sending request" delay="92.230453ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-434755-m03"
	W0919 22:25:54.671717  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:56.671839  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:25:58.672473  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:01.172572  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:03.672671  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:06.172469  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:08.672353  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:11.172405  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:13.672314  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:16.172799  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:18.672196  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:20.672298  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:23.171528  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:25.172008  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:27.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:29.672449  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:31.672563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:33.672868  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:36.170989  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:38.171892  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:40.172022  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:42.172174  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:44.671993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:47.171063  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:49.172486  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:51.672732  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:54.172023  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:56.172144  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:26:58.671775  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:00.671992  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:03.171993  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:05.671723  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:08.171842  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:10.172121  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:12.672014  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:15.172390  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:17.172822  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:19.672126  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:21.673333  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:24.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:26.672310  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:29.171411  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:31.171872  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:33.172386  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:35.172451  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:37.672546  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:40.172235  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:42.172963  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:44.671777  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:46.671841  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:49.171918  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:51.172295  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:53.671812  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:55.672948  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:27:58.171734  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:00.172103  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:02.174861  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:04.672033  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:07.171816  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:09.671792  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:11.672609  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:14.171130  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:16.172329  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:18.672102  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:21.172674  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:23.173027  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:25.672026  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:28.171975  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:30.672302  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:32.672601  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:35.171532  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:37.171862  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:39.672084  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:42.172811  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:44.672206  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:46.672508  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:49.171457  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:51.172154  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:53.172276  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:55.672125  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:28:58.173041  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:00.672216  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:03.172384  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:05.673458  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:08.172666  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:10.672118  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:13.171914  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:15.172099  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:17.671977  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:20.172061  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:22.671971  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:24.672271  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:27.171769  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:29.172036  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:31.172563  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:33.672797  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:36.171859  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:38.671554  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:41.171621  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	W0919 22:29:43.172570  203160 pod_ready.go:104] pod "kube-proxy-dzrbh" is not "Ready", error: <nil>
	I0919 22:29:43.661688  203160 pod_ready.go:86] duration metric: took 3m50.995803943s for pod "kube-proxy-dzrbh" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 22:29:43.661752  203160 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-proxy" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 22:29:43.661771  203160 pod_ready.go:40] duration metric: took 4m0.001130626s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:29:43.663339  203160 out.go:203] 
	W0919 22:29:43.664381  203160 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 22:29:43.665560  203160 out.go:203] 
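
	For readers following the pod_ready.go lines above: the loop waits for each kube-system pod to report the Ready condition or to disappear, and gives up when its deadline expires, which is what surfaces here as "context deadline exceeded". Below is a minimal client-go sketch of that kind of "Ready or gone" poll; the package and function names, the 2-second interval, and the 4-minute deadline are illustrative choices that mirror this log, not minikube's actual implementation.

	// Illustrative sketch only: a "Ready or gone" poll in the style of the
	// pod_ready log lines above, written against client-go. Names, the 2s
	// interval and the 4m deadline are assumptions chosen to mirror this log.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReadyOrGone returns nil once the pod reports Ready or no longer
	// exists; it returns an error when the deadline expires.
	func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return true, nil // pod is gone, stop waiting
				}
				if err != nil {
					return false, nil // transient API error, keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}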
	
	
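	The repeated "Waited before sending request ... client-side throttling" lines above come from client-go's client-side rate limiter. The rest.Config dumped at the top of this log leaves QPS and Burst at 0, so client-go falls back to its defaults and delays bursts of GET requests against the API server. Below is a minimal sketch of raising those limits when building a clientset; the newClientset helper and the specific values are illustrative assumptions, not a recommendation from this report.

	// Illustrative sketch only: building a clientset with a larger
	// client-side rate budget than the defaults used in this run.
	package clientcfg

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // sustained requests per second before throttling kicks in
		cfg.Burst = 100 // short bursts allowed above the sustained rate
		return kubernetes.NewForConfig(cfg)
	}
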
	==> Docker <==
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:49 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:24:53 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:53Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 19 22:24:54 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:24:54Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.225956908Z" level=info msg="ignoring event" container=f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.226083882Z" level=info msg="ignoring event" container=fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287898199Z" level=info msg="ignoring event" container=b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 dockerd[1124]: time="2025-09-19T22:25:02.287938972Z" level=info msg="ignoring event" container=de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:02 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:02Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:03 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:03Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634903380Z" level=info msg="ignoring event" container=e66b377f63cd024c271469a44f4844c50e6d21b7cd4f5be0240558825f482966 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.634965117Z" level=info msg="ignoring event" container=e797401c93bc72db5f536dfa81292a1cbcf7a082f6aa091231b53030ca4c3fe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702221010Z" level=info msg="ignoring event" container=89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 dockerd[1124]: time="2025-09-19T22:25:15.702289485Z" level=info msg="ignoring event" container=bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:25:15 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:25:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:25:17 ha-434755 dockerd[1124]: time="2025-09-19T22:25:17.979227230Z" level=info msg="ignoring event" container=7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:29:46 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:29:48 ha-434755 cri-dockerd[1430]: time="2025-09-19T22:29:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago       Running             busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	37e3f52bd7982       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       1                   af5b94805e3a7       storage-provisioner
	276fb29221693       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	e797401c93bc7       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   bc57496cf8c97       coredns-66bc5c9577-4lmln
	e66b377f63cd0       52546a367cc9e                                                                                         8 minutes ago       Exited              coredns                   1                   89b975ea350c8       coredns-66bc5c9577-w8trg
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              9 minutes ago       Running             kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         9 minutes ago       Running             kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	7dcf79d61a67e       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       0                   af5b94805e3a7       storage-provisioner
	0fc6714ebb308       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     9 minutes ago       Running             kube-vip                  0                   fb11db0e55f38       kube-vip-ha-434755
	baeef3d333816       90550c43ad2bc                                                                                         9 minutes ago       Running             kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         9 minutes ago       Running             etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         9 minutes ago       Running             kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         9 minutes ago       Running             kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	
	
	==> coredns [e66b377f63cd] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40758 - 42383 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000156982s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:56884 - 59881 "HINFO IN 7596401662938690273.2510453177671440305. udp 57 false 512" - - 0 5.000107168s
	[ERROR] plugin/errors: 2 7596401662938690273.2510453177671440305. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [e797401c93bc] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43652 - 47211 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000171362s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:44505 - 54581 "HINFO IN 2104433587108610861.5063388797386552334. udp 57 false 512" - - 0 5.000102051s
	[ERROR] plugin/errors: 2 2104433587108610861.5063388797386552334. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:33:33 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b1fb77ef5024d9e96bd6c3ede9949e2
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m9s
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m9s
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m12s
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m9s
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m19s (x8 over 9m20s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m19s (x7 over 9m20s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m19s (x8 over 9m20s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m12s                  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s                  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s                  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           8m41s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           8m19s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           20s                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:32:29 +0000   Fri, 19 Sep 2025 22:25:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 7aa648a096284af38bb8dd80e5d5ddd1
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m39s
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m39s
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8m25s              kube-proxy       
	  Normal  RegisteredNode           8m36s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           8m35s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           8m19s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:30:03 +0000   Fri, 19 Sep 2025 22:25:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 56ffdb437569490697f0dd38afc6a3b0
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m13s
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m18s
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m16s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  8m15s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  8m14s  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode  20s    node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:32:27.605856Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:37.986542Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:37.986590Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:32:37.991039Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"a99fbed258953a7f","error":"failed to dial a99fbed258953a7f on stream Message (EOF)"}
	{"level":"warn","ts":"2025-09-19T22:32:38.129566Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:41.122917Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:41.122972Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:42.187259Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:32:45.124446Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:45.124539Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:49.126006Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:49.126083Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:53.127626Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:53.127679Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:57.128390Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:32:57.128458Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:33:01.129540Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:33:01.129608Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"a99fbed258953a7f","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-19T22:33:01.289791Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"a99fbed258953a7f","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:33:01.289920Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.289957Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.291087Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"a99fbed258953a7f","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:33:01.291122Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.305641Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:33:01.305908Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	
	
	==> kernel <==
	 22:33:56 up  1:16,  0 users,  load average: 1.45, 2.57, 21.48
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:33:13.792863       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:23.792615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:23.792668       1 main.go:301] handling current node
	I0919 22:33:23.792690       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:23.792696       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:23.792927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:23.792943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:33.792578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:33.792613       1 main.go:301] handling current node
	I0919 22:33:33.792630       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:33.792635       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:33.792844       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:33.792856       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:43.793581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:43.793641       1 main.go:301] handling current node
	I0919 22:33:43.793662       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:43.793669       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:43.793876       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:43.793892       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:53.797667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:53.797706       1 main.go:301] handling current node
	I0919 22:33:53.797728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:53.797735       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:53.797927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:53.797943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	I0919 22:26:02.142559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:03.352353       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:21.770448       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:25.641963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.035829       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:43.682113       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:00.064129       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:04.274915       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:06.869013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:31:17.122601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40186: use of closed network connection
	E0919 22:31:17.356789       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40194: use of closed network connection
	E0919 22:31:17.528046       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:40206: use of closed network connection
	E0919 22:31:17.695940       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43172: use of closed network connection
	E0919 22:31:17.871592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43192: use of closed network connection
	E0919 22:31:18.051715       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43220: use of closed network connection
	E0919 22:31:18.221208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43246: use of closed network connection
	E0919 22:31:18.383983       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43274: use of closed network connection
	E0919 22:31:18.556302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43286: use of closed network connection
	E0919 22:31:20.673796       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:43360: use of closed network connection
	I0919 22:32:12.547033       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:15.112848       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:32:21.329211       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	W0919 22:32:51.329750       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:33:21.614897       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:33:40.905898       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.575330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:24:40.592760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:40.606110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:24:40.613300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:24:40.705675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:24:40.757341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	
	
	==> kubelet <==
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867528    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-lib-modules\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867560    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd2c97ac-215c-4657-a3af-bf74603285af-lib-modules\") pod \"kindnet-djvx4\" (UID: \"dd2c97ac-215c-4657-a3af-bf74603285af\") " pod="kube-system/kindnet-djvx4"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.867616    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg64\" (UniqueName: \"kubernetes.io/projected/9d9843d9-c2ca-4751-8af5-f8fc91cf07c9-kube-api-access-5mg64\") pod \"kube-proxy-gzpg8\" (UID: \"9d9843d9-c2ca-4751-8af5-f8fc91cf07c9\") " pod="kube-system/kube-proxy-gzpg8"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.967871    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54431fee-554c-4c3c-9c81-d779981d36db-config-volume\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:47 ha-434755 kubelet[2465]: I0919 22:24:47.968112    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2k\" (UniqueName: \"kubernetes.io/projected/54431fee-554c-4c3c-9c81-d779981d36db-kube-api-access-8tk2k\") pod \"coredns-66bc5c9577-w8trg\" (UID: \"54431fee-554c-4c3c-9c81-d779981d36db\") " pod="kube-system/coredns-66bc5c9577-w8trg"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069218    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f31e1cc-6bbb-4987-93c7-48e61288b609-config-volume\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.069281    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbd6\" (UniqueName: \"kubernetes.io/projected/0f31e1cc-6bbb-4987-93c7-48e61288b609-kube-api-access-xxbd6\") pod \"coredns-66bc5c9577-4lmln\" (UID: \"0f31e1cc-6bbb-4987-93c7-48e61288b609\") " pod="kube-system/coredns-66bc5c9577-4lmln"
	Sep 19 22:24:48 ha-434755 kubelet[2465]: I0919 22:24:48.597179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.59714647 podStartE2EDuration="1.59714647s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:48.596804879 +0000 UTC m=+4.412561769" watchObservedRunningTime="2025-09-19 22:24:48.59714647 +0000 UTC m=+4.412903362"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381213    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lmln" podStartSLOduration=2.381182844 podStartE2EDuration="2.381182844s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.369703818 +0000 UTC m=+5.185460747" watchObservedRunningTime="2025-09-19 22:24:49.381182844 +0000 UTC m=+5.196939736"
	Sep 19 22:24:49 ha-434755 kubelet[2465]: I0919 22:24:49.381451    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzpg8" podStartSLOduration=2.381444212 podStartE2EDuration="2.381444212s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.381368165 +0000 UTC m=+5.197125048" watchObservedRunningTime="2025-09-19 22:24:49.381444212 +0000 UTC m=+5.197201101"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.429938    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w8trg" podStartSLOduration=6.429916905 podStartE2EDuration="6.429916905s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:24:49.399922361 +0000 UTC m=+5.215679245" watchObservedRunningTime="2025-09-19 22:24:53.429916905 +0000 UTC m=+9.245673795"
	Sep 19 22:24:53 ha-434755 kubelet[2465]: I0919 22:24:53.430179    2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-djvx4" podStartSLOduration=2.5583203169999997 podStartE2EDuration="6.430170951s" podCreationTimestamp="2025-09-19 22:24:47 +0000 UTC" firstStartedPulling="2025-09-19 22:24:49.225935906 +0000 UTC m=+5.041692778" lastFinishedPulling="2025-09-19 22:24:53.097786536 +0000 UTC m=+8.913543412" observedRunningTime="2025-09-19 22:24:53.429847961 +0000 UTC m=+9.245604852" watchObservedRunningTime="2025-09-19 22:24:53.430170951 +0000 UTC m=+9.245927840"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.488942    2465 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 19 22:24:54 ha-434755 kubelet[2465]: I0919 22:24:54.490039    2465 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.592732    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de54ed5bb258a7d8937149fcb9be16e03e34cd6b8786d874a980e9f9ec26d429"
	Sep 19 22:25:02 ha-434755 kubelet[2465]: I0919 22:25:02.617104    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b987cc756018033717c69e468416998c2b07c3a7a6aab5e56b199bbd88fb51fe"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870121    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870167    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.870191    2465 scope.go:117] "RemoveContainer" containerID="fd0a3ab5f285697717d070472745c94ac46d7e376804e2b2690d8192c539ce06"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881409    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89b975ea350c8ada63866afcc9dfe8d144799fa6442ff30b95e39235ca314606"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.881468    2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de"
	Sep 19 22:25:15 ha-434755 kubelet[2465]: I0919 22:25:15.883877    2465 scope.go:117] "RemoveContainer" containerID="f7365ae03012282e042fcdbb9d87e94b89928381e3b6f701b58d0e425f83b14a"
	Sep 19 22:25:18 ha-434755 kubelet[2465]: I0919 22:25:18.938936    2465 scope.go:117] "RemoveContainer" containerID="7dcf79d61a67e78a7e98abac24d2bff68653f6f436028d21debd03806fd167ff"
	Sep 19 22:29:46 ha-434755 kubelet[2465]: I0919 22:29:46.056213    2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b6d\" (UniqueName: \"kubernetes.io/projected/6a28f377-7c2d-478e-8c2c-bc61b6979e96-kube-api-access-s5b6d\") pod \"busybox-7b57f96db7-v7khr\" (UID: \"6a28f377-7c2d-478e-8c2c-bc61b6979e96\") " pod="default/busybox-7b57f96db7-v7khr"
	Sep 19 22:31:17 ha-434755 kubelet[2465]: E0919 22:31:17.528041    2465 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:37176->[::1]:39331: write tcp [::1]:37176->[::1]:39331: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (547s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 stop --alsologtostderr -v 5
E0919 22:34:01.170647  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 stop --alsologtostderr -v 5: (32.225688378s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 start --wait true --alsologtostderr -v 5
E0919 22:37:25.091679  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:33.466745  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:48.157711  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:42:25.091693  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 start --wait true --alsologtostderr -v 5: exit status 80 (8m32.471464362s)

                                                
                                                
-- stdout --
	* [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Enabled addons: 
	
	* Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-434755-m04" worker node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:34:29.392603  254979 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:34:29.392715  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392724  254979 out.go:374] Setting ErrFile to fd 2...
	I0919 22:34:29.392729  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392941  254979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:34:29.393348  254979 out.go:368] Setting JSON to false
	I0919 22:34:29.394260  254979 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4605,"bootTime":1758316664,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:34:29.394355  254979 start.go:140] virtualization: kvm guest
	I0919 22:34:29.396091  254979 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:34:29.397369  254979 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:34:29.397371  254979 notify.go:220] Checking for updates...
	I0919 22:34:29.399394  254979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:34:29.400491  254979 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:29.401460  254979 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:34:29.402392  254979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:34:29.403394  254979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:34:29.404817  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:29.404928  254979 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:34:29.428811  254979 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:34:29.428942  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.487899  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.477486939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.488017  254979 docker.go:318] overlay module found
	I0919 22:34:29.489668  254979 out.go:179] * Using the docker driver based on existing profile
	I0919 22:34:29.490789  254979 start.go:304] selected driver: docker
	I0919 22:34:29.490803  254979 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.490958  254979 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:34:29.491069  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.548618  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.539006546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.549315  254979 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:29.549349  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:29.549417  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:29.549484  254979 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.551223  254979 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:34:29.552360  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:29.553540  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:29.554463  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:29.554533  254979 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:34:29.554548  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:29.554553  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:29.554642  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:29.554659  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:29.554803  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
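The config.json written here is the on-disk JSON form of the cluster spec dumped above. A minimal sketch for pulling the HA node layout back out of it, assuming jq is installed on the Jenkins host and that the JSON keys mirror the Go struct field names shown in the dump:

    jq '.Nodes' /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json

This should list the three control-plane entries (192.168.49.2, .3, .4) plus the m04 worker, matching the Nodes slice above.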
	I0919 22:34:29.573612  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:29.573628  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:29.573642  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:29.573663  254979 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:29.573715  254979 start.go:364] duration metric: took 34.414µs to acquireMachinesLock for "ha-434755"
	I0919 22:34:29.573732  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:29.573739  254979 fix.go:54] fixHost starting: 
	I0919 22:34:29.573944  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.590456  254979 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:34:29.590478  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:29.592146  254979 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:34:29.592198  254979 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:34:29.805688  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.822967  254979 kic.go:430] container "ha-434755" state is running.
	I0919 22:34:29.823300  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:29.840845  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.841033  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:29.841096  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:29.858584  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:29.858850  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:29.858861  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:29.859537  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44758->127.0.0.1:32813: read: connection reset by peer
	I0919 22:34:32.994537  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:32.994564  254979 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:34:32.994618  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.011712  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.011959  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.011976  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:34:33.156752  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:33.156836  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.173652  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.173873  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.173889  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:33.306488  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:33.306532  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:33.306552  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:33.306560  254979 provision.go:84] configureAuth start
	I0919 22:34:33.306606  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:33.323565  254979 provision.go:143] copyHostCerts
	I0919 22:34:33.323598  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323624  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:33.323639  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323706  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:33.323780  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323798  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:33.323804  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323829  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:33.323869  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323886  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:33.323892  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323914  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:33.323960  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:34:33.559679  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:33.559738  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:33.559789  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.577865  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:33.672478  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:33.672568  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:34:33.696200  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:33.696267  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:33.719990  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:33.720060  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:33.743555  254979 provision.go:87] duration metric: took 436.981146ms to configureAuth
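configureAuth regenerated the Docker TLS server certificate with the SANs listed a few lines up (127.0.0.1, 192.168.49.2, ha-434755, localhost, minikube), and the copyRemoteCerts step just above placed it at /etc/docker/server.pem on the node. A minimal way to double-check those SANs, assuming a shell on the node (for example via minikube -p ha-434755 ssh):

    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'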
	I0919 22:34:33.743634  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:33.743848  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:33.743893  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.760563  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.760782  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.760794  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:33.894134  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:33.894169  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:33.894578  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:33.894689  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.912104  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.912369  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.912478  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:34.059005  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:34.059094  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.075824  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.076036  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:34.076054  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:34.214294  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.214323  254979 machine.go:96] duration metric: took 4.373275133s to provisionDockerMachine
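provisionDockerMachine ended by rewriting /lib/systemd/system/docker.service with the ExecStart shown above (TLS on tcp://0.0.0.0:2376, certs under /etc/docker, insecure registry 10.96.0.0/12) and restarting the daemon only if the unit actually changed. A quick confirmation from a node shell that the new unit is in effect, assuming systemctl and the docker CLI are available there (both are invoked elsewhere in this log):

    sudo systemctl cat docker.service | grep ExecStart
    docker version --format '{{.Server.Version}}'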
	I0919 22:34:34.214337  254979 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:34:34.214348  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:34.214400  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:34.214446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.231190  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.326475  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:34.329765  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:34.329812  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:34.329828  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:34.329839  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:34.329853  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:34.329911  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:34.330025  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:34.330042  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:34.330156  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:34.338505  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:34.361549  254979 start.go:296] duration metric: took 147.197651ms for postStartSetup
	I0919 22:34:34.361611  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:34.361647  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.378413  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.469191  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:34.473539  254979 fix.go:56] duration metric: took 4.899792233s for fixHost
	I0919 22:34:34.473566  254979 start.go:83] releasing machines lock for "ha-434755", held for 4.899839715s
	I0919 22:34:34.473629  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:34.489927  254979 ssh_runner.go:195] Run: cat /version.json
	I0919 22:34:34.489970  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.490024  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:34.490090  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.506577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.507908  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.666358  254979 ssh_runner.go:195] Run: systemctl --version
	I0919 22:34:34.670859  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:34.675244  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:34.693880  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:34.693949  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:34.702353  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:34.702375  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.702401  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.702523  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:34.718289  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:34.727659  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:34.736865  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:34.736911  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:34.745995  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.755127  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:34.764124  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.773283  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:34.782430  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:34.791523  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:34.800544  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:34.809524  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:34.817361  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:34.825188  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:34.890049  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
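The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), pin the sandbox image to registry.k8s.io/pause:3.10.1 and point conf_dir at /etc/cni/net.d before this restart. A minimal check of the resulting file from a node shell:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml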
	I0919 22:34:34.960529  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.960584  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.960629  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:34.973026  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:34.983825  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:35.002291  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:35.012972  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:35.023687  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:35.039432  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:35.042752  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:35.050998  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:35.067853  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:35.132842  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:35.196827  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:35.196991  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:35.215146  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:35.225890  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:35.291005  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:34:36.100785  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.112048  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:34:36.122871  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:34:36.134226  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.144968  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:34:36.215570  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:34:36.283944  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.348465  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:34:36.370429  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:34:36.381048  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.447404  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:34:36.520573  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.532578  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:34:36.532632  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:34:36.536280  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.536339  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.539490  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.573579  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
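crictl is reading the /etc/crictl.yaml written a moment ago (runtime-endpoint: unix:///var/run/cri-dockerd.sock), which is why a plain sudo crictl version reports the Docker 28.4.0 engine through cri-dockerd. The endpoint can also be passed explicitly; a minimal sketch from a node shell:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version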
	I0919 22:34:36.573643  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.597609  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.624028  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:34:36.624105  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:36.640631  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:36.644560  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.656165  254979 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:34:36.656309  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:36.656354  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.677616  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.677637  254979 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:34:36.677692  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.698524  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.698549  254979 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:34:36.698563  254979 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:34:36.698688  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
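The kubelet unit fragment above is installed as a systemd drop-in: the scp lines further down write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. To see the merged unit systemd will actually run, assuming a shell on the node:

    sudo systemctl cat kubelet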
	I0919 22:34:36.698756  254979 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:34:36.750118  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:36.750142  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:36.750153  254979 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:34:36.750179  254979 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:34:36.750289  254979 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:34:36.750306  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:36.750341  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:36.762623  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:36.762741  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
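With the ip_vs modules unavailable, kube-vip falls back to ARP announcement of the VIP plus lease-based leader election: the manifest above binds 192.168.49.254 to eth0 on whichever control-plane node holds the plndr-cp-lock lease. Two quick checks once the cluster is back up, assuming kubectl is pointed at this profile and keeping in mind that the VIP should only appear on the current leader:

    kubectl -n kube-system get lease plndr-cp-lock
    minikube -p ha-434755 ssh -- ip addr show eth0 | grep 192.168.49.254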
	I0919 22:34:36.762799  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:36.771904  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:36.771964  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:34:36.780568  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:34:36.798205  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:36.815070  254979 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
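That 2209-byte file is the kubeadm configuration rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents). Recent kubeadm releases include a config validate subcommand, so a hedged sketch for sanity-checking the rendered file with the binaries minikube just located would be:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new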
	I0919 22:34:36.831719  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:36.848409  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:36.851767  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.862730  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.930528  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:36.955755  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:34:36.955780  254979 certs.go:194] generating shared ca certs ...
	I0919 22:34:36.955801  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:36.955964  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:34:36.956015  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:34:36.956028  254979 certs.go:256] generating profile certs ...
	I0919 22:34:36.956149  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:34:36.956184  254979 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837
	I0919 22:34:36.956203  254979 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.093694  254979 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 ...
	I0919 22:34:37.093723  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837: {Name:mkb7dc47ca29d762ecbca001badafbd7a0f63f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093875  254979 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 ...
	I0919 22:34:37.093889  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837: {Name:mkfe1145f49b260387004be5cad78abcf22bf7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093983  254979 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:34:37.094141  254979 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
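The regenerated apiserver certificate now carries every address a client might use in this HA layout, including all three control-plane IPs and the kube-vip VIP 192.168.49.254 listed above. A minimal way to confirm the SAN list on the freshly assembled file, assuming openssl on the Jenkins host:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt | grep -A1 'Subject Alternative Name'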
	I0919 22:34:37.094347  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:34:37.094373  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.094399  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.094419  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.094430  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.094444  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.094453  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.094465  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.094477  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.094562  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:34:37.094597  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.094607  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.094630  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.094660  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.094692  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.094749  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:37.094791  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.094813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.094829  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.095515  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.127336  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:34:37.150544  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.175327  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:34:37.201819  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:34:37.225372  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.248103  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.271531  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:34:37.294329  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:34:37.316902  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.340094  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:34:37.363279  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:34:37.380576  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.385767  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:34:37.394806  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398055  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398106  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.404576  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.412913  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:34:37.421966  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425379  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425442  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.432256  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.440776  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.449890  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453164  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453215  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.459800  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
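The runs above install each CA into the node's system trust store: the certificate is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so TLS clients can locate it. The following is a minimal Go sketch of that convention, shelling out to openssl exactly as the runner does; the paths are illustrative and this is not minikube's implementation.

	// installCACert mirrors the sequence above: given a CA certificate already
	// placed under /usr/share/ca-certificates, compute its OpenSSL subject hash
	// and symlink it as /etc/ssl/certs/<hash>.0 (idempotently, like `ln -fs`).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // recreate the link if it already exists
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}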
	I0919 22:34:37.468138  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.471431  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:34:37.477659  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:34:37.484148  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:34:37.491177  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:34:37.499070  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:34:37.506362  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
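The openssl -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. Below is a minimal Go equivalent using crypto/x509; the certificate path is one of those named above, chosen for illustration, and the snippet is a sketch rather than minikube's own code.

	// checkend reports whether the PEM certificate at path stays valid for at
	// least the given duration, mirroring `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	func checkend(path string, d time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(d).After(cert.NotAfter) {
			return fmt.Errorf("certificate expires at %s, within %s", cert.NotAfter, d)
		}
		return nil
	}

	func main() {
		if err := checkend("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}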
	I0919 22:34:37.513842  254979 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:37.513988  254979 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:34:37.537542  254979 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:34:37.549913  254979 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:34:37.549939  254979 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:34:37.550009  254979 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:34:37.564566  254979 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.565106  254979 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.565386  254979 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:34:37.565797  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.566562  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:34:37.567054  254979 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:34:37.567076  254979 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:34:37.567082  254979 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:34:37.567086  254979 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:34:37.567090  254979 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:34:37.567448  254979 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:34:37.567566  254979 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:34:37.580682  254979 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:34:37.580712  254979 kubeadm.go:593] duration metric: took 30.755549ms to restartPrimaryControlPlane
	I0919 22:34:37.580721  254979 kubeadm.go:394] duration metric: took 66.889653ms to StartCluster
	I0919 22:34:37.580737  254979 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.580803  254979 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.581391  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.581643  254979 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:34:37.581673  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:34:37.581681  254979 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:34:37.582003  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.584304  254979 out.go:179] * Enabled addons: 
	I0919 22:34:37.585620  254979 addons.go:514] duration metric: took 3.930682ms for enable addons: enabled=[]
	I0919 22:34:37.585668  254979 start.go:246] waiting for cluster config update ...
	I0919 22:34:37.585686  254979 start.go:255] writing updated cluster config ...
	I0919 22:34:37.587067  254979 out.go:203] 
	I0919 22:34:37.588682  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.588844  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.590451  254979 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:34:37.591363  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:37.592305  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:37.593270  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:37.593292  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:37.593367  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:37.593388  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:37.593398  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:37.593538  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.620137  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:37.620160  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:37.620173  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:37.620210  254979 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:37.620263  254979 start.go:364] duration metric: took 34.403µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:34:37.620280  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:37.620286  254979 fix.go:54] fixHost starting: m02
	I0919 22:34:37.620582  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.644601  254979 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:34:37.644633  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:37.645946  254979 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:34:37.646038  254979 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:34:37.949352  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.973649  254979 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:34:37.974176  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:37.994068  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.994337  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:37.994397  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:38.015752  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:38.016073  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:38.016093  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:38.016827  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42006->127.0.0.1:32818: read: connection reset by peer
	I0919 22:34:41.154622  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
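The first SSH attempt above is reset because sshd inside the freshly restarted container is not accepting connections yet; the provisioner retries and succeeds about three seconds later. The following is a minimal sketch of such a wait loop, with an assumed address and timeout, not libmachine's actual implementation.

	// waitForSSH retries a TCP dial to the forwarded SSH port until the
	// container's sshd answers or the deadline passes, matching the
	// one-reset-then-success pattern visible in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("ssh on %s not reachable: %w", addr, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		if err := waitForSSH("127.0.0.1:32818", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}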
	I0919 22:34:41.154651  254979 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:34:41.154707  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.173029  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.173245  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.173258  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:34:41.323523  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.323600  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.341537  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.341755  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.341772  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:41.477673  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:41.477715  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:41.477735  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:41.477745  254979 provision.go:84] configureAuth start
	I0919 22:34:41.477795  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:41.495782  254979 provision.go:143] copyHostCerts
	I0919 22:34:41.495828  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495863  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:41.495875  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495952  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:41.496051  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496089  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:41.496098  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496141  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:41.496218  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496251  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:41.496261  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496301  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:41.496386  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:34:41.732873  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:41.732963  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:41.733012  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.750783  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:41.848595  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:41.848667  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:41.873665  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:41.873730  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:41.897993  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:41.898059  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:41.922977  254979 provision.go:87] duration metric: took 445.218722ms to configureAuth
	I0919 22:34:41.923009  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:41.923260  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:41.923309  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.942404  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.942657  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.942672  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:42.078612  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:42.078647  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:42.078854  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:42.078927  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.096405  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.096645  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.096717  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:42.245231  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:42.245405  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.264515  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.264739  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.264757  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:53.646301  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-19 22:32:30.139641518 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:34:42.242101116 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
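The command above only swaps in docker.service.new and restarts Docker when the rendered unit differs from what is already on disk; here the diff shows the Environment=NO_PROXY=192.168.49.2 line being added for this secondary node, so the restart path was taken, which accounts for the roughly eleven-second gap. Below is a minimal Go sketch of the same compare-then-restart pattern; the paths and service name are illustrative only.

	// updateUnit mirrors the "diff || { mv; daemon-reload; restart; }" shell
	// above: rewrite the unit and restart the service only when the rendered
	// content actually changed.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, rendered []byte, service string) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: leave the running service untouched
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", service},
			{"restart", service},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=example\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}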
	I0919 22:34:53.646338  254979 machine.go:96] duration metric: took 15.651988955s to provisionDockerMachine
	I0919 22:34:53.646360  254979 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:34:53.646376  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:53.646456  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:53.646544  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.668809  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.779279  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:53.785219  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:53.785262  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:53.785275  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:53.785285  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:53.785298  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:53.785375  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:53.785594  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:53.785613  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:53.785773  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:53.798199  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:53.832463  254979 start.go:296] duration metric: took 186.083271ms for postStartSetup
	I0919 22:34:53.832621  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:53.832679  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.858619  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.960212  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:53.966312  254979 fix.go:56] duration metric: took 16.34601659s for fixHost
	I0919 22:34:53.966340  254979 start.go:83] releasing machines lock for "ha-434755-m02", held for 16.346069332s
	I0919 22:34:53.966412  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:53.990694  254979 out.go:179] * Found network options:
	I0919 22:34:53.992467  254979 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:34:53.994237  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:53.994289  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:53.994386  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:53.994425  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:53.994439  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.994522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:54.015258  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.015577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.109387  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:54.187526  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:54.187642  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:54.196971  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:54.196996  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.197029  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.197147  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.213126  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:54.222913  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:54.232770  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:54.232827  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:54.242273  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.252123  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:54.261682  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.271056  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:54.279900  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:54.289084  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:54.298339  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:54.307617  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:54.315730  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:54.323734  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:54.421356  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:34:54.553517  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.553570  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.553663  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:54.567589  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.578657  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:54.598306  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.610176  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:54.621475  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.637463  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:54.640827  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:54.649159  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:54.666320  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:54.793386  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:54.888125  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:54.888175  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:54.907425  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:54.918281  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:55.016695  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:35:12.030390  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.013654873s)
	I0919 22:35:12.030485  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:35:12.046005  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:35:12.062445  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:35:12.090262  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.103570  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:35:12.186633  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:35:12.276082  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.351919  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:35:12.379448  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:35:12.392643  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.476410  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:35:12.559621  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.572526  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:35:12.572588  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:35:12.576491  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:35:12.576564  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:35:12.579932  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:35:12.614468  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:35:12.614551  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.641603  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.668151  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:35:12.669148  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:35:12.670150  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:12.686876  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:35:12.690808  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
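The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway 192.168.49.1: any existing entry for that name is filtered out and a fresh tab-separated entry is appended. The following is a simplified Go sketch of the same edit; it writes the file directly rather than going through a temp file and sudo cp, and it is not minikube's implementation.

	// ensureHostsEntry drops any line ending in "\t<name>" from the hosts file
	// and appends a fresh "<ip>\t<name>" entry, like the shell pipeline above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}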
	I0919 22:35:12.702422  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:35:12.702695  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:12.702948  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:35:12.719929  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:35:12.720184  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:35:12.720198  254979 certs.go:194] generating shared ca certs ...
	I0919 22:35:12.720233  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:35:12.720391  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:35:12.720481  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:35:12.720510  254979 certs.go:256] generating profile certs ...
	I0919 22:35:12.720610  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:35:12.720697  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.90db4c9c
	I0919 22:35:12.720757  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:35:12.720773  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:35:12.720795  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:35:12.720813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:35:12.720830  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:35:12.720847  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:35:12.720866  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:35:12.720884  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:35:12.720902  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:35:12.720966  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:35:12.721023  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:35:12.721036  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:35:12.721076  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:35:12.721111  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:35:12.721146  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:35:12.721242  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:35:12.721296  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:12.721327  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:35:12.721346  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:35:12.721427  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:35:12.738056  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:35:12.825819  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:35:12.830244  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:35:12.843478  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:35:12.847190  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:35:12.859905  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:35:12.863484  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:35:12.875902  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:35:12.879295  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:35:12.891480  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:35:12.894661  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:35:12.906895  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:35:12.910234  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:35:12.922725  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:35:12.947840  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:35:12.972792  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:35:12.997517  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:35:13.022085  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:35:13.047365  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:35:13.072377  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:35:13.099533  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:35:13.134971  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:35:13.167709  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:35:13.206266  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:35:13.239665  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:35:13.266921  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:35:13.294118  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:35:13.321828  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:35:13.343786  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:35:13.366845  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:35:13.389708  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
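The stat/scp sequence above reads the shared signing material (the service-account key pair, the front-proxy CA and the etcd CA) from the existing control plane into memory and writes it onto the control-plane node being prepared, so every control-plane node signs with the same keys. The sketch below only models that as a local directory copy; in the log the transfer happens over SSH, and the file list is the one visible above.

	// syncSharedControlPlaneSecrets copies the key material that all
	// control-plane nodes must share from one certs directory to another.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	var sharedFiles = []string{
		"sa.pub", "sa.key",
		"front-proxy-ca.crt", "front-proxy-ca.key",
		"etcd/ca.crt", "etcd/ca.key",
	}

	func syncSharedControlPlaneSecrets(srcDir, dstDir string) error {
		for _, rel := range sharedFiles {
			data, err := os.ReadFile(filepath.Join(srcDir, rel))
			if err != nil {
				return err
			}
			dst := filepath.Join(dstDir, rel)
			if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
				return err
			}
			if err := os.WriteFile(dst, data, 0o600); err != nil {
				return err
			}
		}
		return nil
	}

	func main() {
		if err := syncSharedControlPlaneSecrets("/var/lib/minikube/certs", "/tmp/m02-certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}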
	I0919 22:35:13.412481  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:35:13.419706  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:35:13.431765  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436337  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436418  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.444550  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:35:13.455699  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:35:13.468242  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472223  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472279  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.480857  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:35:13.491084  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:35:13.501753  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505877  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505933  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.512774  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:35:13.522847  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:35:13.526705  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:35:13.533354  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:35:13.540112  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:35:13.546612  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:35:13.553144  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:35:13.560238  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:35:13.568285  254979 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:35:13.568401  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
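The kubelet drop-in shown above differs between nodes only in the --hostname-override and --node-ip flags. The small sketch below renders that ExecStart for given values; the values are the ones logged for this node, while the helper itself is purely illustrative.

	// kubeletUnit renders the per-node kubelet ExecStart from the log above,
	// parameterised on the only fields that vary between nodes.
	package main

	import "fmt"

	func kubeletUnit(version, hostname, nodeIP string) string {
		return fmt.Sprintf(`[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s
	`, version, hostname, nodeIP)
	}

	func main() {
		fmt.Print(kubeletUnit("v1.34.0", "ha-434755-m02", "192.168.49.3"))
	}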
	I0919 22:35:13.568434  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:35:13.568481  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:35:13.580554  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:13.580617  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
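The kube-vip static pod manifest above is rendered from the cluster's VIP settings; because `lsmod | grep ip_vs` exited non-zero, the generated config relies on ARP-based leader election rather than IPVS load-balancing. A rough sketch, using a hand-written template rather than minikube's actual kube-vip.go template, of how such a manifest can be rendered from a few parameters:

```go
package main

import (
	"os"
	"text/template"
)

// vipConfig holds the handful of values that vary per cluster in this sketch.
type vipConfig struct {
	Address   string // control-plane VIP, e.g. 192.168.49.254
	Interface string // interface the VIP is announced on
	Port      string // API server port the VIP fronts
}

// manifestTmpl is a trimmed-down, illustrative template; the real manifest
// carries many more environment variables (leader election, lease timings, ...).
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.0
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.Address}}}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	cfg := vipConfig{Address: "192.168.49.254", Interface: "eth0", Port: "8443"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```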
	I0919 22:35:13.580665  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:35:13.589430  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:35:13.589492  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:35:13.598285  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:35:13.616427  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:35:13.634472  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:35:13.652547  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:35:13.656296  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
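The one-liner above pins `control-plane.minikube.internal` to the VIP by stripping any existing entry from /etc/hosts and appending a fresh one via a temp file, so repeated runs never accumulate duplicate lines. A minimal Go sketch of the same idempotent rewrite, assuming the process can write the target file directly (no sudo handling):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps ip to name,
// mirroring the grep -v / echo / cp pipeline in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimRight(line, " \t")
		// Drop any previous mapping for the name (tab- or space-separated).
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
	// Write to a temp file first, then rename, like the /tmp/h.$$ step above.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```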
	I0919 22:35:13.667861  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.787658  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.800614  254979 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:35:13.800904  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:13.802716  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:35:13.803906  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.907011  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.921258  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:35:13.921345  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:35:13.921671  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:35:44.196598  254979 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:35:44.196684  254979 node_ready.go:38] duration metric: took 30.274978813s for node "ha-434755-m02" to be "Ready" ...
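Waiting for the node to report Ready is a poll against the node object on the API server. A rough client-go sketch of that wait, assuming a recent client-go (where Get takes a context), a placeholder kubeconfig path, and none of the retry/backoff layering minikube adds on top:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True
// or the timeout elapses.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the test authenticates with the profile's client certs instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-434755-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```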
	I0919 22:35:44.196715  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:44.196778  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:44.696945  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.197315  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.697715  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.197708  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.697596  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.197741  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.697273  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.197137  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.696833  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.197637  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.696961  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.196947  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.697707  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.197053  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.697638  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.197170  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.697689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.197733  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.696981  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.197207  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.697745  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.197895  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.697086  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.197535  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.209362  254979 api_server.go:72] duration metric: took 42.408698512s to wait for apiserver process to appear ...
	I0919 22:35:56.209386  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:56.209404  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:56.215038  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:56.215908  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:56.215931  254979 api_server.go:131] duration metric: took 6.538723ms to wait for apiserver health ...
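Once the apiserver process exists, readiness is confirmed by hitting /healthz over HTTPS and expecting a 200 "ok". A minimal standard-library sketch of that probe, assuming TLS verification is skipped for brevity (the real check verifies the server against the minikube CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch short; a real probe should
		// trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ok after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
```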
	I0919 22:35:56.215940  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:56.222250  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:56.222279  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.222289  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.222294  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.222299  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.222306  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.222311  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.222316  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.222322  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.222328  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.222334  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.222342  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.222348  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.222353  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.222359  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.222373  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222385  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222394  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.222401  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.222409  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.222415  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.222424  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.222432  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.222444  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.222452  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.222459  254979 system_pods.go:74] duration metric: took 6.512304ms to wait for pod list to return data ...
	I0919 22:35:56.222473  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:56.224777  254979 default_sa.go:45] found service account: "default"
	I0919 22:35:56.224800  254979 default_sa.go:55] duration metric: took 2.313413ms for default service account to be created ...
	I0919 22:35:56.224809  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:56.230069  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:56.230095  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.230102  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.230139  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.230151  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.230157  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.230165  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.230173  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.230181  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.230189  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.230194  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.230202  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.230207  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.230215  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.230221  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.230234  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230245  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230256  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.230266  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.230271  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.230279  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.230288  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.230293  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.230301  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.230305  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.230316  254979 system_pods.go:126] duration metric: took 5.500729ms to wait for k8s-apps to be running ...
	I0919 22:35:56.230326  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:56.230378  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:56.242876  254979 system_svc.go:56] duration metric: took 12.542054ms WaitForService to wait for kubelet
	I0919 22:35:56.242903  254979 kubeadm.go:578] duration metric: took 42.442241309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:56.242932  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:56.245954  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.245981  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.245997  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246003  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246012  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246017  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246026  254979 node_conditions.go:105] duration metric: took 3.08778ms to run NodePressure ...
	I0919 22:35:56.246039  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:35:56.246070  254979 start.go:255] writing updated cluster config ...
	I0919 22:35:56.248251  254979 out.go:203] 
	I0919 22:35:56.249459  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:56.249573  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.250931  254979 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:35:56.252085  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:35:56.253026  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:35:56.253903  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:35:56.253926  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:35:56.253965  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:35:56.254039  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:35:56.254055  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:35:56.254179  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.276167  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:35:56.276192  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:35:56.276216  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:35:56.276247  254979 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:56.276314  254979 start.go:364] duration metric: took 46.178µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:35:56.276338  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:35:56.276347  254979 fix.go:54] fixHost starting: m03
	I0919 22:35:56.276613  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.293331  254979 fix.go:112] recreateIfNeeded on ha-434755-m03: state=Stopped err=<nil>
	W0919 22:35:56.293356  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:35:56.294620  254979 out.go:252] * Restarting existing docker container for "ha-434755-m03" ...
	I0919 22:35:56.294682  254979 cli_runner.go:164] Run: docker start ha-434755-m03
	I0919 22:35:56.544302  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.562451  254979 kic.go:430] container "ha-434755-m03" state is running.
	I0919 22:35:56.562784  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:35:56.581792  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.581992  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:35:56.582050  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:56.600026  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:56.600332  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:56.600350  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:35:56.600929  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44862->127.0.0.1:32823: read: connection reset by peer
	I0919 22:35:59.744345  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.744380  254979 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:35:59.744468  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.762953  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.763211  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.763229  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:35:59.918402  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.918522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.938390  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.938725  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.938751  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:36:00.092594  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:00.092621  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:36:00.092638  254979 ubuntu.go:190] setting up certificates
	I0919 22:36:00.092648  254979 provision.go:84] configureAuth start
	I0919 22:36:00.092699  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:00.111285  254979 provision.go:143] copyHostCerts
	I0919 22:36:00.111330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111368  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:36:00.111377  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111550  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:36:00.111664  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111692  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:36:00.111702  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111734  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:36:00.111789  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111815  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:36:00.111822  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111851  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:36:00.111906  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
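The server cert generated above must carry subject alternative names for every address the Docker daemon on m03 answers on (127.0.0.1, 192.168.49.4, the hostname, localhost, minikube). A sketch of creating a certificate with those SANs via crypto/x509; the real provisioner signs with the minikube CA key, whereas this sketch self-signs to stay short:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: the IPs and DNS names the daemon serves.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		DNSNames:    []string{"ha-434755-m03", "localhost", "minikube"},
	}
	// Self-signed here (template doubles as parent); the provisioner uses ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Write server.pem; writing the matching server-key.pem is analogous.
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```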
	I0919 22:36:00.494093  254979 provision.go:177] copyRemoteCerts
	I0919 22:36:00.494184  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:36:00.494248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.515583  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:00.617642  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:36:00.617700  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:36:00.643926  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:36:00.643995  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:36:00.672921  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:36:00.672984  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:36:00.696141  254979 provision.go:87] duration metric: took 603.480386ms to configureAuth
	I0919 22:36:00.696172  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:36:00.696410  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:00.696474  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.713380  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.713659  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.713680  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:36:00.854280  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:36:00.854306  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:36:00.854441  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:36:00.854527  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.877075  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.877355  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.877461  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:36:01.044491  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:36:01.044612  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.068534  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:01.068808  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:01.068828  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:36:01.223884  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:01.223911  254979 machine.go:96] duration metric: took 4.641904945s to provisionDockerMachine
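The diff/mv/daemon-reload command above only swaps in the freshly rendered docker.service and restarts the daemon when it actually differs from the unit already on disk, so repeated provisioning runs do not bounce Docker needlessly. A small Go sketch of the same compare-then-replace idea, assuming local file access and systemctl on PATH:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// applyUnitIfChanged installs newPath over unitPath and restarts the service
// only when the contents differ, mirroring the diff/mv/daemon-reload step.
func applyUnitIfChanged(unitPath, newPath, service string) error {
	oldData, _ := os.ReadFile(unitPath) // a missing file just counts as "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		fmt.Println("unit unchanged; nothing to do")
		return os.Remove(newPath)
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := applyUnitIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```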
	I0919 22:36:01.223926  254979 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:36:01.223940  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:36:01.224000  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:36:01.224053  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.247249  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.353476  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:36:01.356784  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:36:01.356827  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:36:01.356837  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:36:01.356847  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:36:01.356861  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:36:01.356914  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:36:01.356983  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:36:01.356995  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:36:01.357079  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:36:01.366123  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:01.390127  254979 start.go:296] duration metric: took 166.185556ms for postStartSetup
	I0919 22:36:01.390194  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:01.390248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.407444  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.500338  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:36:01.504828  254979 fix.go:56] duration metric: took 5.228477836s for fixHost
	I0919 22:36:01.504853  254979 start.go:83] releasing machines lock for "ha-434755-m03", held for 5.228525958s
	I0919 22:36:01.504916  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:01.524319  254979 out.go:179] * Found network options:
	I0919 22:36:01.525507  254979 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:36:01.526520  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526544  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526563  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526574  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:36:01.526649  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:36:01.526654  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:36:01.526686  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.526705  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.544526  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.545603  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.637520  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:36:01.728766  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:36:01.728826  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:36:01.738432  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:36:01.738466  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:01.738512  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:01.738626  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:01.755304  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:36:01.764834  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:36:01.774412  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:36:01.774471  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:36:01.783943  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.793341  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:36:01.802524  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.811594  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:36:01.821804  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:36:01.831556  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:36:01.840844  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:36:01.850193  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:36:01.858696  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:36:01.866797  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:01.986845  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:36:02.197731  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:02.197787  254979 detect.go:190] detected "systemd" cgroup driver on host os
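The provisioner detects whether the host uses the systemd or cgroupfs cgroup driver, then rewrites containerd's config.toml (the sed edits above) and, just below, Docker's daemon.json to match, since a kubelet/runtime cgroup-driver mismatch is a common startup failure. A rough sketch of one heuristic for this detection, assuming the usual unified-hierarchy mount point; this is not minikube's actual detect.go logic:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// detectCgroupDriver guesses the cgroup driver the kubelet and container
// runtime should agree on: "systemd" when systemd is PID 1, and likewise
// on a cgroup v2 unified hierarchy; otherwise "cgroupfs".
func detectCgroupDriver() string {
	comm, err := os.ReadFile("/proc/1/comm")
	if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
		return "systemd"
	}
	// cgroup v2 exposes cgroup.controllers at the root of the unified mount.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}
```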
	I0919 22:36:02.197844  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:36:02.210890  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.222293  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:36:02.239996  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.251285  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:36:02.262578  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:02.279146  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:36:02.282932  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:36:02.291330  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:36:02.310148  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:36:02.435893  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:36:02.556587  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:36:02.556638  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:36:02.575909  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:36:02.587513  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:02.699861  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:36:33.801843  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.101937915s)
	I0919 22:36:33.801930  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:36:33.818125  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:36:33.834866  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:36:33.856162  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:33.868263  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:36:33.959996  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:36:34.048061  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.129937  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:36:34.153114  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:36:34.164068  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.253067  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:36:34.329305  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:34.341450  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:36:34.341524  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:36:34.345717  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:36:34.345785  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:36:34.349309  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:36:34.384417  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:36:34.384478  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.410290  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.435551  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:36:34.436601  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:36:34.437771  254979 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:36:34.438757  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:36:34.455686  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:36:34.459411  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:34.471099  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:36:34.471369  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:34.471706  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:36:34.488100  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:36:34.488367  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:36:34.488381  254979 certs.go:194] generating shared ca certs ...
	I0919 22:36:34.488395  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:36:34.488553  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:36:34.488618  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:36:34.488633  254979 certs.go:256] generating profile certs ...
	I0919 22:36:34.488734  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:36:34.488804  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:36:34.488858  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:36:34.488871  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:36:34.488892  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:36:34.488912  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:36:34.488929  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:36:34.488945  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:36:34.488961  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:36:34.488983  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:36:34.489000  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:36:34.489057  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:36:34.489095  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:36:34.489107  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:36:34.489136  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:36:34.489176  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:36:34.489207  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:36:34.489261  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:34.489295  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:34.489311  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:36:34.489330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:36:34.489388  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:36:34.506474  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:36:34.592737  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:36:34.596550  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:36:34.609026  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:36:34.612572  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:36:34.624601  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:36:34.627756  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:36:34.639526  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:36:34.642628  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:36:34.654080  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:36:34.657248  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:36:34.668694  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:36:34.671921  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:36:34.683466  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:36:34.706717  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:36:34.729514  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:36:34.752135  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:36:34.775534  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:36:34.798386  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:36:34.821220  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:36:34.844089  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:36:34.869124  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:36:34.903928  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:36:34.937896  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:36:34.975415  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:36:35.003119  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:36:35.033569  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:36:35.067233  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:36:35.092336  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:36:35.121987  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:36:35.159147  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:36:35.187449  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:36:35.196710  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:36:35.210371  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215556  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215667  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.226373  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:36:35.242338  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:36:35.257634  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.262962  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.263018  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.272303  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:36:35.284458  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:36:35.297192  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.302970  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.303198  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.312827  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:36:35.325971  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:36:35.330277  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:36:35.340364  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:36:35.350648  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:36:35.360874  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:36:35.371688  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:36:35.380714  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
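
Editor's note: the six `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours before the new node is joined. Below is a minimal stand-alone sketch of the same check in Go using only the standard library; the file path and 24h threshold are illustrative and this is not minikube's internal code.

    // checkcert.go: report whether a PEM certificate expires within the next 24h,
    // mirroring `openssl x509 -noout -in <file> -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // illustrative path
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
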
	I0919 22:36:35.389839  254979 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:36:35.389978  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:36:35.390024  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:36:35.390079  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:36:35.406530  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.406626  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:36:35.406688  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:36:35.416527  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:36:35.416590  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:36:35.428557  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:36:35.448698  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:36:35.468117  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:36:35.487717  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:36:35.491337  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
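
Editor's note: the bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA VIP: it drops any existing line for that hostname and appends the new mapping. A rough Go equivalent of that filter-and-append step is sketched below; the paths are hard-coded for illustration, whereas the real flow runs the shell command over SSH on the node.

    // hosts_update.go: replace the control-plane.minikube.internal entry in a hosts file.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/tmp/hosts" // use a scratch copy for illustration; the log targets /etc/hosts
    	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // drop the stale mapping, as `grep -v` does in the logged command
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
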
	I0919 22:36:35.502239  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.627390  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.641188  254979 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:36:35.641510  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:35.647624  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:36:35.648653  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.764651  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.779233  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:36:35.779307  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:36:35.779583  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782664  254979 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:36:35.782690  254979 node_ready.go:38] duration metric: took 3.089431ms for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782710  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:36:35.782756  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.283749  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.783801  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.283597  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.783305  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.283177  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.783246  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.283742  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.783802  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.283143  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.783619  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.283703  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.783799  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.283102  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.783689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.282927  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.783272  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.283621  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.783685  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.283492  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.783334  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.283701  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.783449  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.283236  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.783314  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.283694  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.783679  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.283688  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.783717  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.797519  254979 api_server.go:72] duration metric: took 14.156281107s to wait for apiserver process to appear ...
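
Editor's note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a simple poll; the check is retried roughly every 500ms until pgrep exits 0 or the node's overall wait timeout is reached. A minimal local sketch of that pattern follows (the pattern, interval and deadline are illustrative, not the actual minikube implementation).

    // wait_for_process.go: poll pgrep until a matching process appears or a deadline passes.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches the pattern.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
    	os.Exit(1)
    }
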
	I0919 22:36:49.797549  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:36:49.797570  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:36:49.801827  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:36:49.802688  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:36:49.802713  254979 api_server.go:131] duration metric: took 5.156138ms to wait for apiserver health ...
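
Editor's note: once the apiserver process exists, readiness is confirmed with an authenticated GET against /healthz, which must return 200 with body "ok" (as seen just above). The sketch below performs the same request with the profile's client certificate and CA; the paths and host come from the log, but this is a stand-alone illustration, not the rest.Config plumbing minikube actually uses.

    // healthz.go: authenticated HTTPS GET against the apiserver /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	home := os.Getenv("HOME")
    	cert, err := tls.LoadX509KeyPair(
    		home+"/.minikube/profiles/ha-434755/client.crt",
    		home+"/.minikube/profiles/ha-434755/client.key",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	caPEM, err := os.ReadFile(home + "/.minikube/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
    }
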
	I0919 22:36:49.802724  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:36:49.808731  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:36:49.808759  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.808765  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.808769  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.808774  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.808786  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.808797  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.808802  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.808807  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.808815  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.808820  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.808827  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.808832  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.808840  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.808845  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.808851  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.808857  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.808866  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.808877  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.808886  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.808890  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.808898  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.808903  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.808910  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.808914  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.808924  254979 system_pods.go:74] duration metric: took 6.193414ms to wait for pod list to return data ...
	I0919 22:36:49.808934  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:36:49.811398  254979 default_sa.go:45] found service account: "default"
	I0919 22:36:49.811416  254979 default_sa.go:55] duration metric: took 2.472816ms for default service account to be created ...
	I0919 22:36:49.811424  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:36:49.816515  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:36:49.816539  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.816545  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.816549  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.816553  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.816557  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.816560  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.816563  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.816566  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.816570  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.816573  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.816579  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.816583  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.816586  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.816590  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.816593  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.816600  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.816608  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.816614  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.816617  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.816620  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.816624  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.816627  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.816630  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.816632  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.816638  254979 system_pods.go:126] duration metric: took 5.209961ms to wait for k8s-apps to be running ...
	I0919 22:36:49.816646  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:36:49.816685  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:49.829643  254979 system_svc.go:56] duration metric: took 12.988959ms WaitForService to wait for kubelet
	I0919 22:36:49.829668  254979 kubeadm.go:578] duration metric: took 14.188435808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:36:49.829689  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:36:49.832790  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832809  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832821  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832826  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832831  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832839  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832844  254979 node_conditions.go:105] duration metric: took 3.149763ms to run NodePressure ...
	I0919 22:36:49.832857  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:36:49.832880  254979 start.go:255] writing updated cluster config ...
	I0919 22:36:49.834545  254979 out.go:203] 
	I0919 22:36:49.835774  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:49.835888  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.837288  254979 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:36:49.838260  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:36:49.839218  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:36:49.840185  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:36:49.840202  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:36:49.840217  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:36:49.840288  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:36:49.840299  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:36:49.840387  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.860086  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:36:49.860107  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:36:49.860127  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:36:49.860154  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:36:49.860216  254979 start.go:364] duration metric: took 42.254µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:36:49.860236  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:36:49.860245  254979 fix.go:54] fixHost starting: m04
	I0919 22:36:49.860537  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:49.877660  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:36:49.877688  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:36:49.879872  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:36:49.879927  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:36:50.108344  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:50.127577  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:36:50.127896  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:36:50.145596  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:50.145849  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:36:50.145921  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:36:50.163888  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:50.164152  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:36:50.164171  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:36:50.164828  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56462->127.0.0.1:32828: read: connection reset by peer
	I0919 22:36:53.166776  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:56.168046  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:59.169790  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:02.171741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:05.172828  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:08.173440  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:11.174724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:14.176746  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:17.178760  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:20.179240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:23.181529  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:26.182690  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:29.183750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:32.185732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:35.186818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:38.187492  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:41.188831  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:44.189595  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:47.191778  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:50.192786  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:53.193740  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:56.194732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:59.195773  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:02.197710  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:05.198608  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:08.199769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:11.200694  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:14.201718  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:17.203754  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:20.204819  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:23.207054  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:26.207724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:29.208708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:32.210377  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:35.211423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:38.212678  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:41.213761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:44.216005  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:47.217723  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:50.218834  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:53.220905  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:56.221494  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:59.222787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:02.224748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:05.225885  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:08.226688  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:11.228737  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:14.230719  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:17.232761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.233716  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.234909  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.236732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.237733  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.239782  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.240787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.241853  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.243182  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.245159  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.246728  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.247035  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
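
Editor's note: the long run of "Error dialing TCP ... connection refused" lines above is libmachine retrying the SSH connection to the container's forwarded port (127.0.0.1:32828) roughly every three seconds; because the container's sshd never comes up, the loop runs out the full provisioning window. A minimal sketch of such a dial-with-retry loop is below (address, interval and deadline are illustrative).

    // dial_retry.go: retry a TCP dial until it succeeds or a deadline passes.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const addr = "127.0.0.1:32828" // forwarded SSH port from the log, for illustration
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			fmt.Println("port is accepting connections")
    			return
    		}
    		fmt.Fprintln(os.Stderr, "dial failed, retrying:", err)
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Fprintln(os.Stderr, "gave up waiting for", addr)
    	os.Exit(1)
    }
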
	I0919 22:39:50.247075  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:39:50.247172  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.267390  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.267465  254979 machine.go:96] duration metric: took 3m0.121600261s to provisionDockerMachine
	I0919 22:39:50.267561  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:50.267599  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.284438  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.284611  254979 retry.go:31] will retry after 316.809243ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.601960  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.624526  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.624657  254979 retry.go:31] will retry after 330.8195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.956237  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.973928  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.974043  254979 retry.go:31] will retry after 838.035272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.812938  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.833782  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:51.833951  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:51.833974  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.834032  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:51.834079  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.854105  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:51.854225  254979 retry.go:31] will retry after 224.006538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.078741  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.096705  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.096817  254979 retry.go:31] will retry after 423.331741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.520446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.540094  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.540200  254979 retry.go:31] will retry after 355.89061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.896715  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.915594  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.915696  254979 retry.go:31] will retry after 642.935309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.559619  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:53.577650  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:53.577803  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577829  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.577840  254979 fix.go:56] duration metric: took 3m3.717595523s for fixHost
	I0919 22:39:53.577850  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.717623259s
	W0919 22:39:53.577867  254979 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577986  254979 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.578002  254979 start.go:729] Will try again in 5 seconds ...
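
Editor's note: the step that keeps failing with "unable to inspect a not running container to get SSH port" is the docker inspect template shown in the warnings above, which reads the host port published for 22/tcp; it only yields a value while the container is actually running. A small sketch of the same lookup via the docker CLI (container name taken from the log, error handling simplified):

    // ssh_port.go: read the host port mapped to 22/tcp of a container via `docker inspect`.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"ha-434755-m04").Output()
    	if err != nil {
    		// A stopped container has no published ports, so the template errors out;
    		// that is what the repeated exit-code-1 warnings in the log correspond to.
    		fmt.Fprintln(os.Stderr, "inspect failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
    }
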
	I0919 22:39:58.578679  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:58.578811  254979 start.go:364] duration metric: took 67.723µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:39:58.578838  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:58.578849  254979 fix.go:54] fixHost starting: m04
	I0919 22:39:58.579176  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.599096  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:39:58.599126  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:58.600560  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:39:58.600634  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:39:58.859923  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.879236  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:39:58.879668  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:39:58.897236  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:39:58.897463  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:58.897552  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:39:58.918053  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:58.918271  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:39:58.918281  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:58.918874  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38044->127.0.0.1:32833: read: connection reset by peer
	I0919 22:40:01.920959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:04.921476  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:07.922288  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:10.923340  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:13.923844  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:16.925745  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:19.926668  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:22.928799  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:25.930210  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:28.930708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:31.933147  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:34.934423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:37.934726  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:40.935749  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:43.937730  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:46.940224  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:49.940869  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:52.941959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:55.943080  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:58.944241  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:01.945832  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:04.946150  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:07.947240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:10.947732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:13.949692  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:16.951725  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:19.952381  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:22.953741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:25.954706  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:28.955793  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:31.957862  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:34.959138  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:37.960247  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:40.961431  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:43.962702  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:46.964762  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:49.965365  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:52.966748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:55.968435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:58.968992  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:01.970768  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:04.971818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:07.972196  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:10.973355  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:13.974698  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:16.976791  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:19.977362  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:22.979658  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:25.981435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.981739  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.983953  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.984393  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.984732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.985736  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.987769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.989756  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.990750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.991490  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.991855  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.992596  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:58.992632  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:42:58.992719  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.013746  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.013831  254979 machine.go:96] duration metric: took 3m0.116353121s to provisionDockerMachine
	I0919 22:42:59.013918  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:59.013953  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.033883  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.033989  254979 retry.go:31] will retry after 316.823283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.351622  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.370204  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.370320  254979 retry.go:31] will retry after 311.292492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.682751  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.702069  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.702202  254979 retry.go:31] will retry after 591.889704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.294731  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.313949  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:00.314105  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:00.314125  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.314184  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:00.314230  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.331741  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.331862  254979 retry.go:31] will retry after 207.410605ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.540373  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.558832  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.558943  254979 retry.go:31] will retry after 400.484554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.960435  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.980834  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.980981  254979 retry.go:31] will retry after 805.175329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.786666  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:01.804452  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:01.804589  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.804609  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.804626  254979 fix.go:56] duration metric: took 3m3.225778678s for fixHost
	I0919 22:43:01.804633  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.225810313s
	W0919 22:43:01.804739  254979 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.806803  254979 out.go:203] 
	W0919 22:43:01.808013  254979 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.808027  254979 out.go:285] * 
	* 
	W0919 22:43:01.810171  254979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:43:01.811468  254979 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-434755 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-434755	192.168.49.2
ha-434755-m02	192.168.49.3
ha-434755-m03	192.168.49.4
ha-434755-m04	

                                                
                                                
After restart: ha-434755	192.168.49.2
ha-434755-m02	192.168.49.3
ha-434755-m03	192.168.49.4
ha-434755-m04	192.168.49.5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:34:29.615072967Z",
	            "FinishedAt": "2025-09-19T22:34:29.008814579Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74329b990d9dce1255e17e62df25a8a9f852fdd2c0a3169e4fe5efa476dd74f4",
	            "SandboxKey": "/var/run/docker/netns/74329b990d9d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:d1:ee:b6:45:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "d75b4c607beec906838273796c0d4d2073838732be19fc5120b629f9aef39297",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
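
One detail worth noting in the inspect output above: HostConfig.PortBindings pins HostIp to 127.0.0.1 but leaves HostPort empty, so Docker assigns fresh ephemeral host ports each time the container starts; the ports that actually took effect (e.g. 32813 for 22/tcp) appear under NetworkSettings.Ports. A minimal, hypothetical sketch of reading those assigned ports from the full `docker inspect` JSON (Go standard library only; field names as shown in the stdout block above):

// ports_sketch.go: hypothetical example of listing the published ports from
// `docker inspect` JSON for the ha-434755 container shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields this sketch needs from the inspect document.
type inspect struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "ha-434755").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var docs []inspect
	if err := json.Unmarshal(out, &docs); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, d := range docs {
		for port, bindings := range d.NetworkSettings.Ports {
			for _, b := range bindings {
				// e.g. "/ha-434755 22/tcp -> 127.0.0.1:32813" for the running primary node.
				fmt.Printf("%s %s -> %s:%s\n", d.Name, port, b.HostIp, b.HostPort)
			}
		}
	}
}

For the stopped ha-434755-m04 node this Ports map would be empty, which is why the per-port template lookup earlier in the log had nothing to return.
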
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.203379031s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ stop    │ ha-434755 stop --alsologtostderr -v 5                                                                                               │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:34 UTC │
	│ start   │ ha-434755 start --wait true --alsologtostderr -v 5                                                                                  │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:34 UTC │                     │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:34:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:34:29.392603  254979 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:34:29.392715  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392724  254979 out.go:374] Setting ErrFile to fd 2...
	I0919 22:34:29.392729  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392941  254979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:34:29.393348  254979 out.go:368] Setting JSON to false
	I0919 22:34:29.394260  254979 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4605,"bootTime":1758316664,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:34:29.394355  254979 start.go:140] virtualization: kvm guest
	I0919 22:34:29.396091  254979 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:34:29.397369  254979 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:34:29.397371  254979 notify.go:220] Checking for updates...
	I0919 22:34:29.399394  254979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:34:29.400491  254979 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:29.401460  254979 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:34:29.402392  254979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:34:29.403394  254979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:34:29.404817  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:29.404928  254979 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:34:29.428811  254979 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:34:29.428942  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.487899  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.477486939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.488017  254979 docker.go:318] overlay module found
	I0919 22:34:29.489668  254979 out.go:179] * Using the docker driver based on existing profile
	I0919 22:34:29.490789  254979 start.go:304] selected driver: docker
	I0919 22:34:29.490803  254979 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.490958  254979 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:34:29.491069  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.548618  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.539006546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.549315  254979 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:29.549349  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:29.549417  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:29.549484  254979 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.551223  254979 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:34:29.552360  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:29.553540  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:29.554463  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:29.554533  254979 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:34:29.554548  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:29.554553  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:29.554642  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:29.554659  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:29.554803  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.573612  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:29.573628  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:29.573642  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:29.573663  254979 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:29.573715  254979 start.go:364] duration metric: took 34.414µs to acquireMachinesLock for "ha-434755"
	I0919 22:34:29.573732  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:29.573739  254979 fix.go:54] fixHost starting: 
	I0919 22:34:29.573944  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.590456  254979 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:34:29.590478  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:29.592146  254979 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:34:29.592198  254979 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:34:29.805688  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.822967  254979 kic.go:430] container "ha-434755" state is running.
	I0919 22:34:29.823300  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:29.840845  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.841033  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:29.841096  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:29.858584  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:29.858850  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:29.858861  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:29.859537  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44758->127.0.0.1:32813: read: connection reset by peer
	I0919 22:34:32.994537  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:32.994564  254979 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:34:32.994618  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.011712  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.011959  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.011976  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:34:33.156752  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:33.156836  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.173652  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.173873  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.173889  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:33.306488  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:33.306532  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:33.306552  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:33.306560  254979 provision.go:84] configureAuth start
	I0919 22:34:33.306606  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:33.323565  254979 provision.go:143] copyHostCerts
	I0919 22:34:33.323598  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323624  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:33.323639  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323706  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:33.323780  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323798  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:33.323804  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323829  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:33.323869  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323886  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:33.323892  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323914  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:33.323960  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:34:33.559679  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:33.559738  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:33.559789  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.577865  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:33.672478  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:33.672568  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:34:33.696200  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:33.696267  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:33.719990  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:33.720060  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:33.743555  254979 provision.go:87] duration metric: took 436.981146ms to configureAuth
	I0919 22:34:33.743634  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:33.743848  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:33.743893  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.760563  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.760782  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.760794  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:33.894134  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:33.894169  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:33.894578  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:33.894689  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.912104  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.912369  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.912478  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:34.059005  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:34.059094  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.075824  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.076036  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:34.076054  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:34.214294  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.214323  254979 machine.go:96] duration metric: took 4.373275133s to provisionDockerMachine
	I0919 22:34:34.214337  254979 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:34:34.214348  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:34.214400  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:34.214446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.231190  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.326475  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:34.329765  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:34.329812  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:34.329828  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:34.329839  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:34.329853  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:34.329911  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:34.330025  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:34.330042  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:34.330156  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:34.338505  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:34.361549  254979 start.go:296] duration metric: took 147.197651ms for postStartSetup
	I0919 22:34:34.361611  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:34.361647  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.378413  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.469191  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:34.473539  254979 fix.go:56] duration metric: took 4.899792233s for fixHost
	I0919 22:34:34.473566  254979 start.go:83] releasing machines lock for "ha-434755", held for 4.899839715s
	I0919 22:34:34.473629  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:34.489927  254979 ssh_runner.go:195] Run: cat /version.json
	I0919 22:34:34.489970  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.490024  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:34.490090  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.506577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.507908  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.666358  254979 ssh_runner.go:195] Run: systemctl --version
	I0919 22:34:34.670859  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:34.675244  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:34.693880  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:34.693949  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:34.702353  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
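For reference, the two find/sed passes above aim to leave any loopback CNI config with an explicit name and cniVersion 1.0.0, and to park bridge/podman configs under a .mk_disabled suffix. A sketch of what a patched loopback file would look like (file name and exact contents vary per image; this is illustrative, not captured from the run):

    sudo sh -c 'cat /etc/cni/net.d/*loopback.conf*'
    # expected shape after patching:
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }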
	I0919 22:34:34.702375  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.702401  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.702523  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:34.718289  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:34.727659  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:34.736865  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:34.736911  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:34.745995  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.755127  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:34.764124  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.773283  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:34.782430  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:34.791523  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:34.800544  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:34.809524  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:34.817361  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:34.825188  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:34.890049  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
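The sed passes above only matter if they actually landed in /etc/containerd/config.toml; a spot check after the restart could look like the following (keys taken from the commands above, output not captured in this run):

    sudo grep -En 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
    # conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true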
	I0919 22:34:34.960529  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.960584  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.960629  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:34.973026  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:34.983825  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:35.002291  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:35.012972  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:35.023687  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:35.039432  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:35.042752  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:35.050998  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:35.067853  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:35.132842  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:35.196827  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:35.196991  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:35.215146  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:35.225890  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:35.291005  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:34:36.100785  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
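The 129-byte /etc/docker/daemon.json payload is not echoed in the log; a daemon.json that selects the systemd cgroup driver typically looks like the sketch below (contents assumed, not taken from this run). The same docker info template that appears later in the log confirms the result:

    # hypothetical daemon.json selecting the systemd driver (real payload not shown above):
    #   { "exec-opts": ["native.cgroupdriver=systemd"] }
    docker info --format '{{.CgroupDriver}}'   # expected output: systemd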
	I0919 22:34:36.112048  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:34:36.122871  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:34:36.134226  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.144968  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:34:36.215570  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:34:36.283944  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.348465  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:34:36.370429  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:34:36.381048  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.447404  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:34:36.520573  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.532578  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:34:36.532632  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:34:36.536280  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.536339  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.539490  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.573579  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
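Since /etc/crictl.yaml was pointed at the cri-dockerd socket a few lines earlier, the same version check can be reproduced by hand; the explicit --runtime-endpoint flag below is redundant with that config file and shown only for clarity:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    # expected fields, matching the log: RuntimeName: docker, RuntimeVersion: 28.4.0, RuntimeApiVersion: v1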
	I0919 22:34:36.573643  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.597609  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.624028  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:34:36.624105  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:36.640631  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:36.644560  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.656165  254979 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:34:36.656309  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:36.656354  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.677616  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.677637  254979 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:34:36.677692  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.698524  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.698549  254979 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:34:36.698563  254979 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:34:36.698688  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
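This kubelet override is written out later in the log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes); once installed, the merged unit can be inspected on the node with, for example:

    systemctl cat kubelet                # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart  # should show the single kubelet command above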
	I0919 22:34:36.698756  254979 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:34:36.750118  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:36.750142  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:36.750153  254979 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:34:36.750179  254979 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:34:36.750289  254979 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
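One way to sanity-check a generated config like the one above before kubeadm consumes it: recent kubeadm releases ship a `config validate` subcommand. The binary and file paths below are taken from later lines in this log; this step is not part of the test itself:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new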
	
	I0919 22:34:36.750306  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:36.750341  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:36.762623  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:36.762741  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
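kube-vip runs as a static pod (the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml further down), and with vip_arp and cp_enable set it should eventually bind 192.168.49.254 on eth0 of whichever control-plane node holds the plndr-cp-lock lease. A quick check from a node shell, assuming kube-vip has started and won leader election there:

    ip -4 addr show dev eth0 | grep 192.168.49.254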
	I0919 22:34:36.762799  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:36.771904  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:36.771964  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:34:36.780568  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:34:36.798205  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:36.815070  254979 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:34:36.831719  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:36.848409  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:36.851767  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.862730  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.930528  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:36.955755  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:34:36.955780  254979 certs.go:194] generating shared ca certs ...
	I0919 22:34:36.955801  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:36.955964  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:34:36.956015  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:34:36.956028  254979 certs.go:256] generating profile certs ...
	I0919 22:34:36.956149  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:34:36.956184  254979 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837
	I0919 22:34:36.956203  254979 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.093694  254979 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 ...
	I0919 22:34:37.093723  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837: {Name:mkb7dc47ca29d762ecbca001badafbd7a0f63f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093875  254979 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 ...
	I0919 22:34:37.093889  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837: {Name:mkfe1145f49b260387004be5cad78abcf22bf7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093983  254979 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:34:37.094141  254979 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
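The apiserver certificate generated above carries the VIP and all three control-plane IPs in its SAN list; after it is copied to /var/lib/minikube/certs/apiserver.crt (see below), the SANs could be confirmed with openssl (illustrative, not part of the run):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.4
    # and 192.168.49.254 among the listed entries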
	I0919 22:34:37.094347  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:34:37.094373  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.094399  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.094419  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.094430  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.094444  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.094453  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.094465  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.094477  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.094562  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:34:37.094597  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.094607  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.094630  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.094660  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.094692  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.094749  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:37.094791  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.094813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.094829  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.095515  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.127336  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:34:37.150544  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.175327  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:34:37.201819  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:34:37.225372  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.248103  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.271531  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:34:37.294329  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:34:37.316902  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.340094  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:34:37.363279  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:34:37.380576  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.385767  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:34:37.394806  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398055  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398106  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.404576  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.412913  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:34:37.421966  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425379  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425442  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.432256  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.440776  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.449890  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453164  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453215  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.459800  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
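The three openssl/ln pairs above all follow the standard OpenSSL CA lookup convention: each certificate is published under /usr/share/ca-certificates and a symlink named after its subject hash (for example b5213941.0 for minikubeCA.pem) is placed in /etc/ssl/certs. A slightly condensed version of the same pattern, for reference:

    CERT=/usr/share/ca-certificates/minikubeCA.pem        # any of the certs installed above
    HASH=$(openssl x509 -hash -noout -in "$CERT")          # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"         # OpenSSL resolves CAs via <hash>.0 links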
	I0919 22:34:37.468138  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.471431  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:34:37.477659  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:34:37.484148  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:34:37.491177  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:34:37.499070  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:34:37.506362  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:34:37.513842  254979 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:37.513988  254979 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:34:37.537542  254979 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:34:37.549913  254979 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:34:37.549939  254979 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:34:37.550009  254979 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:34:37.564566  254979 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.565106  254979 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.565386  254979 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:34:37.565797  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.566562  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:34:37.567054  254979 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:34:37.567076  254979 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:34:37.567082  254979 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:34:37.567086  254979 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:34:37.567090  254979 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:34:37.567448  254979 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:34:37.567566  254979 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:34:37.580682  254979 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:34:37.580712  254979 kubeadm.go:593] duration metric: took 30.755549ms to restartPrimaryControlPlane
	I0919 22:34:37.580721  254979 kubeadm.go:394] duration metric: took 66.889653ms to StartCluster
	I0919 22:34:37.580737  254979 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.580803  254979 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.581391  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.581643  254979 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:34:37.581673  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:34:37.581681  254979 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:34:37.582003  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.584304  254979 out.go:179] * Enabled addons: 
	I0919 22:34:37.585620  254979 addons.go:514] duration metric: took 3.930682ms for enable addons: enabled=[]
	I0919 22:34:37.585668  254979 start.go:246] waiting for cluster config update ...
	I0919 22:34:37.585686  254979 start.go:255] writing updated cluster config ...
	I0919 22:34:37.587067  254979 out.go:203] 
	I0919 22:34:37.588682  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.588844  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.590451  254979 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:34:37.591363  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:37.592305  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:37.593270  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:37.593292  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:37.593367  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:37.593388  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:37.593398  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:37.593538  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.620137  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:37.620160  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:37.620173  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:37.620210  254979 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:37.620263  254979 start.go:364] duration metric: took 34.403µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:34:37.620280  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:37.620286  254979 fix.go:54] fixHost starting: m02
	I0919 22:34:37.620582  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.644601  254979 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:34:37.644633  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:37.645946  254979 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:34:37.646038  254979 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:34:37.949352  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.973649  254979 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:34:37.974176  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:37.994068  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.994337  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:37.994397  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:38.015752  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:38.016073  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:38.016093  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:38.016827  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42006->127.0.0.1:32818: read: connection reset by peer
	I0919 22:34:41.154622  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.154651  254979 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:34:41.154707  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.173029  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.173245  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.173258  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:34:41.323523  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.323600  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.341537  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.341755  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.341772  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:41.477673  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:41.477715  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:41.477735  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:41.477745  254979 provision.go:84] configureAuth start
	I0919 22:34:41.477795  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:41.495782  254979 provision.go:143] copyHostCerts
	I0919 22:34:41.495828  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495863  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:41.495875  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495952  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:41.496051  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496089  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:41.496098  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496141  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:41.496218  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496251  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:41.496261  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496301  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:41.496386  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:34:41.732873  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:41.732963  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:41.733012  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.750783  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:41.848595  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:41.848667  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:41.873665  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:41.873730  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:41.897993  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:41.898059  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:41.922977  254979 provision.go:87] duration metric: took 445.218722ms to configureAuth
	I0919 22:34:41.923009  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:41.923260  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:41.923309  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.942404  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.942657  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.942672  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:42.078612  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:42.078647  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:42.078854  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:42.078927  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.096405  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.096645  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.096717  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:42.245231  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:42.245405  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.264515  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.264739  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.264757  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:53.646301  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-19 22:32:30.139641518 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:34:42.242101116 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:34:53.646338  254979 machine.go:96] duration metric: took 15.651988955s to provisionDockerMachine
	I0919 22:34:53.646360  254979 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:34:53.646376  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:53.646456  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:53.646544  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.668809  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.779279  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:53.785219  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:53.785262  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:53.785275  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:53.785285  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:53.785298  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:53.785375  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:53.785594  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:53.785613  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:53.785773  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:53.798199  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:53.832463  254979 start.go:296] duration metric: took 186.083271ms for postStartSetup
	I0919 22:34:53.832621  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:53.832679  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.858619  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.960212  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:53.966312  254979 fix.go:56] duration metric: took 16.34601659s for fixHost
	I0919 22:34:53.966340  254979 start.go:83] releasing machines lock for "ha-434755-m02", held for 16.346069332s
	I0919 22:34:53.966412  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:53.990694  254979 out.go:179] * Found network options:
	I0919 22:34:53.992467  254979 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:34:53.994237  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:53.994289  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:53.994386  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:53.994425  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:53.994439  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.994522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:54.015258  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.015577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.109387  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:54.187526  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:54.187642  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:54.196971  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:54.196996  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.197029  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.197147  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.213126  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:54.222913  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:54.232770  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:54.232827  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:54.242273  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.252123  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:54.261682  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.271056  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:54.279900  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:54.289084  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:54.298339  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:54.307617  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:54.315730  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:54.323734  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:54.421356  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:34:54.553517  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.553570  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.553663  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:54.567589  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.578657  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:54.598306  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.610176  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:54.621475  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.637463  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:54.640827  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:54.649159  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:54.666320  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:54.793386  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:54.888125  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:54.888175  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:54.907425  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:54.918281  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:55.016695  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:35:12.030390  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.013654873s)
	I0919 22:35:12.030485  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:35:12.046005  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:35:12.062445  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:35:12.090262  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.103570  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:35:12.186633  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:35:12.276082  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.351919  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:35:12.379448  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:35:12.392643  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.476410  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:35:12.559621  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.572526  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:35:12.572588  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:35:12.576491  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:35:12.576564  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:35:12.579932  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:35:12.614468  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:35:12.614551  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.641603  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.668151  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:35:12.669148  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:35:12.670150  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:12.686876  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:35:12.690808  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:12.702422  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:35:12.702695  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:12.702948  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:35:12.719929  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:35:12.720184  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:35:12.720198  254979 certs.go:194] generating shared ca certs ...
	I0919 22:35:12.720233  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:35:12.720391  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:35:12.720481  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:35:12.720510  254979 certs.go:256] generating profile certs ...
	I0919 22:35:12.720610  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:35:12.720697  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.90db4c9c
	I0919 22:35:12.720757  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:35:12.720773  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:35:12.720795  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:35:12.720813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:35:12.720830  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:35:12.720847  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:35:12.720866  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:35:12.720884  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:35:12.720902  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:35:12.720966  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:35:12.721023  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:35:12.721036  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:35:12.721076  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:35:12.721111  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:35:12.721146  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:35:12.721242  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:35:12.721296  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:12.721327  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:35:12.721346  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:35:12.721427  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:35:12.738056  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:35:12.825819  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:35:12.830244  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:35:12.843478  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:35:12.847190  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:35:12.859905  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:35:12.863484  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:35:12.875902  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:35:12.879295  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:35:12.891480  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:35:12.894661  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:35:12.906895  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:35:12.910234  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:35:12.922725  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:35:12.947840  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:35:12.972792  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:35:12.997517  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:35:13.022085  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:35:13.047365  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:35:13.072377  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:35:13.099533  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:35:13.134971  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:35:13.167709  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:35:13.206266  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:35:13.239665  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:35:13.266921  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:35:13.294118  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:35:13.321828  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:35:13.343786  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:35:13.366845  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:35:13.389708  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:35:13.412481  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:35:13.419706  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:35:13.431765  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436337  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436418  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.444550  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:35:13.455699  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:35:13.468242  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472223  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472279  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.480857  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:35:13.491084  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:35:13.501753  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505877  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505933  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.512774  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:35:13.522847  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:35:13.526705  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:35:13.533354  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:35:13.540112  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:35:13.546612  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:35:13.553144  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:35:13.560238  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:35:13.568285  254979 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:35:13.568401  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:35:13.568434  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:35:13.568481  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:35:13.580554  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:13.580617  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:35:13.580665  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:35:13.589430  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:35:13.589492  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:35:13.598285  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:35:13.616427  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:35:13.634472  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:35:13.652547  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:35:13.656296  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:13.667861  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.787658  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.800614  254979 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:35:13.800904  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:13.802716  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:35:13.803906  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.907011  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.921258  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:35:13.921345  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:35:13.921671  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:35:44.196598  254979 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:35:44.196684  254979 node_ready.go:38] duration metric: took 30.274978813s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:35:44.196715  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:44.196778  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:44.696945  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.197315  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.697715  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.197708  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.697596  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.197741  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.697273  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.197137  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.696833  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.197637  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.696961  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.196947  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.697707  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.197053  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.697638  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.197170  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.697689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.197733  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.696981  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.197207  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.697745  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.197895  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.697086  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.197535  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.209362  254979 api_server.go:72] duration metric: took 42.408698512s to wait for apiserver process to appear ...
	I0919 22:35:56.209386  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:56.209404  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:56.215038  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:56.215908  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:56.215931  254979 api_server.go:131] duration metric: took 6.538723ms to wait for apiserver health ...
	I0919 22:35:56.215940  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:56.222250  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:56.222279  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.222289  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.222294  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.222299  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.222306  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.222311  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.222316  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.222322  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.222328  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.222334  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.222342  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.222348  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.222353  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.222359  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.222373  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222385  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222394  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.222401  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.222409  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.222415  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.222424  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.222432  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.222444  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.222452  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.222459  254979 system_pods.go:74] duration metric: took 6.512304ms to wait for pod list to return data ...
	I0919 22:35:56.222473  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:56.224777  254979 default_sa.go:45] found service account: "default"
	I0919 22:35:56.224800  254979 default_sa.go:55] duration metric: took 2.313413ms for default service account to be created ...
	I0919 22:35:56.224809  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:56.230069  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:56.230095  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.230102  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.230139  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.230151  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.230157  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.230165  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.230173  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.230181  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.230189  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.230194  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.230202  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.230207  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.230215  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.230221  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.230234  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230245  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230256  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.230266  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.230271  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.230279  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.230288  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.230293  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.230301  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.230305  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.230316  254979 system_pods.go:126] duration metric: took 5.500729ms to wait for k8s-apps to be running ...
	I0919 22:35:56.230326  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:56.230378  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:56.242876  254979 system_svc.go:56] duration metric: took 12.542054ms WaitForService to wait for kubelet
	I0919 22:35:56.242903  254979 kubeadm.go:578] duration metric: took 42.442241309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:56.242932  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:56.245954  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.245981  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.245997  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246003  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246012  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246017  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246026  254979 node_conditions.go:105] duration metric: took 3.08778ms to run NodePressure ...
	I0919 22:35:56.246039  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:35:56.246070  254979 start.go:255] writing updated cluster config ...
	I0919 22:35:56.248251  254979 out.go:203] 
	I0919 22:35:56.249459  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:56.249573  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.250931  254979 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:35:56.252085  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:35:56.253026  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:35:56.253903  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:35:56.253926  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:35:56.253965  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:35:56.254039  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:35:56.254055  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:35:56.254179  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.276167  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:35:56.276192  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:35:56.276216  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:35:56.276247  254979 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:56.276314  254979 start.go:364] duration metric: took 46.178µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:35:56.276338  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:35:56.276347  254979 fix.go:54] fixHost starting: m03
	I0919 22:35:56.276613  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.293331  254979 fix.go:112] recreateIfNeeded on ha-434755-m03: state=Stopped err=<nil>
	W0919 22:35:56.293356  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:35:56.294620  254979 out.go:252] * Restarting existing docker container for "ha-434755-m03" ...
	I0919 22:35:56.294682  254979 cli_runner.go:164] Run: docker start ha-434755-m03
	I0919 22:35:56.544302  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.562451  254979 kic.go:430] container "ha-434755-m03" state is running.
	I0919 22:35:56.562784  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:35:56.581792  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.581992  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:35:56.582050  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:56.600026  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:56.600332  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:56.600350  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:35:56.600929  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44862->127.0.0.1:32823: read: connection reset by peer
	I0919 22:35:59.744345  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.744380  254979 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:35:59.744468  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.762953  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.763211  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.763229  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:35:59.918402  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.918522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.938390  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.938725  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.938751  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:36:00.092594  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:00.092621  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:36:00.092638  254979 ubuntu.go:190] setting up certificates
	I0919 22:36:00.092648  254979 provision.go:84] configureAuth start
	I0919 22:36:00.092699  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:00.111285  254979 provision.go:143] copyHostCerts
	I0919 22:36:00.111330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111368  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:36:00.111377  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111550  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:36:00.111664  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111692  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:36:00.111702  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111734  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:36:00.111789  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111815  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:36:00.111822  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111851  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:36:00.111906  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:36:00.494093  254979 provision.go:177] copyRemoteCerts
	I0919 22:36:00.494184  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:36:00.494248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.515583  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:00.617642  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:36:00.617700  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:36:00.643926  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:36:00.643995  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:36:00.672921  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:36:00.672984  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:36:00.696141  254979 provision.go:87] duration metric: took 603.480386ms to configureAuth
	I0919 22:36:00.696172  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:36:00.696410  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:00.696474  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.713380  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.713659  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.713680  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:36:00.854280  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:36:00.854306  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:36:00.854441  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:36:00.854527  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.877075  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.877355  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.877461  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:36:01.044491  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:36:01.044612  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.068534  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:01.068808  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:01.068828  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:36:01.223884  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:01.223911  254979 machine.go:96] duration metric: took 4.641904945s to provisionDockerMachine
	I0919 22:36:01.223926  254979 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:36:01.223940  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:36:01.224000  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:36:01.224053  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.247249  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.353476  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:36:01.356784  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:36:01.356827  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:36:01.356837  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:36:01.356847  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:36:01.356861  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:36:01.356914  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:36:01.356983  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:36:01.356995  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:36:01.357079  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:36:01.366123  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:01.390127  254979 start.go:296] duration metric: took 166.185556ms for postStartSetup
	I0919 22:36:01.390194  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:01.390248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.407444  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.500338  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:36:01.504828  254979 fix.go:56] duration metric: took 5.228477836s for fixHost
	I0919 22:36:01.504853  254979 start.go:83] releasing machines lock for "ha-434755-m03", held for 5.228525958s
	I0919 22:36:01.504916  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:01.524319  254979 out.go:179] * Found network options:
	I0919 22:36:01.525507  254979 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:36:01.526520  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526544  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526563  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526574  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:36:01.526649  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:36:01.526654  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:36:01.526686  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.526705  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.544526  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.545603  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.637520  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:36:01.728766  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:36:01.728826  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:36:01.738432  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:36:01.738466  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:01.738512  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:01.738626  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:01.755304  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:36:01.764834  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:36:01.774412  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:36:01.774471  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:36:01.783943  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.793341  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:36:01.802524  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.811594  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:36:01.821804  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:36:01.831556  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:36:01.840844  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:36:01.850193  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:36:01.858696  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:36:01.866797  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:01.986845  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:36:02.197731  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:02.197787  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:02.197844  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:36:02.210890  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.222293  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:36:02.239996  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.251285  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:36:02.262578  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:02.279146  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:36:02.282932  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:36:02.291330  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:36:02.310148  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:36:02.435893  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:36:02.556587  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:36:02.556638  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:36:02.575909  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:36:02.587513  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:02.699861  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:36:33.801843  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.101937915s)
	I0919 22:36:33.801930  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:36:33.818125  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:36:33.834866  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:36:33.856162  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:33.868263  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:36:33.959996  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:36:34.048061  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.129937  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:36:34.153114  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:36:34.164068  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.253067  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:36:34.329305  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:34.341450  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:36:34.341524  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:36:34.345717  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:36:34.345785  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:36:34.349309  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:36:34.384417  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:36:34.384478  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.410290  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.435551  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:36:34.436601  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:36:34.437771  254979 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:36:34.438757  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:36:34.455686  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:36:34.459411  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:34.471099  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:36:34.471369  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:34.471706  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:36:34.488100  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:36:34.488367  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:36:34.488381  254979 certs.go:194] generating shared ca certs ...
	I0919 22:36:34.488395  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:36:34.488553  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:36:34.488618  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:36:34.488633  254979 certs.go:256] generating profile certs ...
	I0919 22:36:34.488734  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:36:34.488804  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:36:34.488858  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:36:34.488871  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:36:34.488892  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:36:34.488912  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:36:34.488929  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:36:34.488945  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:36:34.488961  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:36:34.488983  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:36:34.489000  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:36:34.489057  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:36:34.489095  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:36:34.489107  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:36:34.489136  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:36:34.489176  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:36:34.489207  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:36:34.489261  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:34.489295  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:34.489311  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:36:34.489330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:36:34.489388  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:36:34.506474  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:36:34.592737  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:36:34.596550  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:36:34.609026  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:36:34.612572  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:36:34.624601  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:36:34.627756  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:36:34.639526  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:36:34.642628  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:36:34.654080  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:36:34.657248  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:36:34.668694  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:36:34.671921  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:36:34.683466  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:36:34.706717  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:36:34.729514  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:36:34.752135  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:36:34.775534  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:36:34.798386  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:36:34.821220  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:36:34.844089  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:36:34.869124  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:36:34.903928  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:36:34.937896  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:36:34.975415  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:36:35.003119  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:36:35.033569  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:36:35.067233  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:36:35.092336  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:36:35.121987  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:36:35.159147  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:36:35.187449  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:36:35.196710  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:36:35.210371  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215556  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215667  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.226373  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:36:35.242338  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:36:35.257634  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.262962  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.263018  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.272303  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:36:35.284458  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:36:35.297192  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.302970  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.303198  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.312827  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:36:35.325971  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:36:35.330277  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:36:35.340364  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:36:35.350648  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:36:35.360874  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:36:35.371688  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:36:35.380714  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:36:35.389839  254979 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:36:35.389978  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:36:35.390024  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:36:35.390079  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:36:35.406530  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.406626  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:36:35.406688  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:36:35.416527  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:36:35.416590  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:36:35.428557  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:36:35.448698  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:36:35.468117  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:36:35.487717  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:36:35.491337  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:35.502239  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.627390  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.641188  254979 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:36:35.641510  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:35.647624  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:36:35.648653  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.764651  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.779233  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:36:35.779307  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:36:35.779583  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782664  254979 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:36:35.782690  254979 node_ready.go:38] duration metric: took 3.089431ms for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782710  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:36:35.782756  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.283749  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.783801  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.283597  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.783305  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.283177  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.783246  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.283742  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.783802  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.283143  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.783619  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.283703  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.783799  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.283102  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.783689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.282927  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.783272  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.283621  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.783685  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.283492  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.783334  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.283701  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.783449  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.283236  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.783314  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.283694  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.783679  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.283688  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.783717  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.797519  254979 api_server.go:72] duration metric: took 14.156281107s to wait for apiserver process to appear ...
	I0919 22:36:49.797549  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:36:49.797570  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:36:49.801827  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:36:49.802688  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:36:49.802713  254979 api_server.go:131] duration metric: took 5.156138ms to wait for apiserver health ...
	I0919 22:36:49.802724  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:36:49.808731  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:36:49.808759  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.808765  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.808769  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.808774  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.808786  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.808797  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.808802  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.808807  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.808815  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.808820  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.808827  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.808832  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.808840  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.808845  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.808851  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.808857  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.808866  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.808877  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.808886  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.808890  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.808898  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.808903  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.808910  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.808914  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.808924  254979 system_pods.go:74] duration metric: took 6.193414ms to wait for pod list to return data ...
	I0919 22:36:49.808934  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:36:49.811398  254979 default_sa.go:45] found service account: "default"
	I0919 22:36:49.811416  254979 default_sa.go:55] duration metric: took 2.472816ms for default service account to be created ...
	I0919 22:36:49.811424  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:36:49.816515  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:36:49.816539  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.816545  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.816549  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.816553  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.816557  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.816560  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.816563  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.816566  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.816570  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.816573  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.816579  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.816583  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.816586  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.816590  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.816593  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.816600  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.816608  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.816614  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.816617  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.816620  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.816624  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.816627  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.816630  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.816632  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.816638  254979 system_pods.go:126] duration metric: took 5.209961ms to wait for k8s-apps to be running ...
	I0919 22:36:49.816646  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:36:49.816685  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:49.829643  254979 system_svc.go:56] duration metric: took 12.988959ms WaitForService to wait for kubelet
	I0919 22:36:49.829668  254979 kubeadm.go:578] duration metric: took 14.188435808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:36:49.829689  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:36:49.832790  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832809  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832821  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832826  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832831  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832839  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832844  254979 node_conditions.go:105] duration metric: took 3.149763ms to run NodePressure ...
	I0919 22:36:49.832857  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:36:49.832880  254979 start.go:255] writing updated cluster config ...
	I0919 22:36:49.834545  254979 out.go:203] 
	I0919 22:36:49.835774  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:49.835888  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.837288  254979 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:36:49.838260  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:36:49.839218  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:36:49.840185  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:36:49.840202  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:36:49.840217  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:36:49.840288  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:36:49.840299  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:36:49.840387  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.860086  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:36:49.860107  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:36:49.860127  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:36:49.860154  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:36:49.860216  254979 start.go:364] duration metric: took 42.254µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:36:49.860236  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:36:49.860245  254979 fix.go:54] fixHost starting: m04
	I0919 22:36:49.860537  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:49.877660  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:36:49.877688  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:36:49.879872  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:36:49.879927  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:36:50.108344  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:50.127577  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:36:50.127896  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:36:50.145596  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:50.145849  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:36:50.145921  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:36:50.163888  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:50.164152  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:36:50.164171  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:36:50.164828  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56462->127.0.0.1:32828: read: connection reset by peer
	I0919 22:36:53.166776  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:56.168046  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:59.169790  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:02.171741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:05.172828  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:08.173440  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:11.174724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:14.176746  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:17.178760  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:20.179240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:23.181529  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:26.182690  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:29.183750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:32.185732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:35.186818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:38.187492  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:41.188831  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:44.189595  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:47.191778  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:50.192786  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:53.193740  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:56.194732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:59.195773  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:02.197710  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:05.198608  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:08.199769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:11.200694  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:14.201718  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:17.203754  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:20.204819  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:23.207054  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:26.207724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:29.208708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:32.210377  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:35.211423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:38.212678  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:41.213761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:44.216005  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:47.217723  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:50.218834  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:53.220905  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:56.221494  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:59.222787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:02.224748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:05.225885  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:08.226688  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:11.228737  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:14.230719  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:17.232761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.233716  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.234909  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.236732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.237733  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.239782  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.240787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.241853  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.243182  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.245159  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.246728  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.247035  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:50.247075  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:39:50.247172  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.267390  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.267465  254979 machine.go:96] duration metric: took 3m0.121600261s to provisionDockerMachine
	I0919 22:39:50.267561  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:50.267599  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.284438  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.284611  254979 retry.go:31] will retry after 316.809243ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.601960  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.624526  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.624657  254979 retry.go:31] will retry after 330.8195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.956237  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.973928  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.974043  254979 retry.go:31] will retry after 838.035272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.812938  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.833782  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:51.833951  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:51.833974  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.834032  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:51.834079  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.854105  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:51.854225  254979 retry.go:31] will retry after 224.006538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.078741  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.096705  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.096817  254979 retry.go:31] will retry after 423.331741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.520446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.540094  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.540200  254979 retry.go:31] will retry after 355.89061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.896715  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.915594  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.915696  254979 retry.go:31] will retry after 642.935309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.559619  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:53.577650  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:53.577803  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577829  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.577840  254979 fix.go:56] duration metric: took 3m3.717595523s for fixHost
	I0919 22:39:53.577850  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.717623259s
	W0919 22:39:53.577867  254979 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577986  254979 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.578002  254979 start.go:729] Will try again in 5 seconds ...
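Note on the block above: the three-minute run of "Error dialing TCP ... connection refused" lines comes from libmachine repeatedly probing the forwarded SSH port of the freshly restarted ha-434755-m04 container, and the "will retry after ..." lines come from minikube's retry helper backing off between `docker container inspect` calls once that port can no longer be resolved. The following is a minimal, hypothetical Go sketch of that probe-and-back-off pattern; the address, delays, and attempt count are assumptions taken loosely from this log, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeSSH makes a single TCP connection attempt to the forwarded SSH port.
    func probeSSH(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	// 127.0.0.1:32828 is the host port shown in this log; it is only
    	// reachable while the ha-434755-m04 container is actually running.
    	addr := "127.0.0.1:32828"
    	delay := 300 * time.Millisecond // first back-off, loosely echoing the retry.go lines above
    	for attempt := 1; attempt <= 5; attempt++ {
    		err := probeSSH(addr)
    		if err == nil {
    			fmt.Println("ssh port reachable")
    			return
    		}
    		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay *= 2 // grow the delay between probes
    	}
    	fmt.Println("giving up after 5 attempts")
    }

In the run recorded here the probe never succeeds within provisionDockerMachine's 3m0s budget, so fixHost gives up and, as logged above, the whole start is retried once more after 5 seconds; that second attempt begins immediately below.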
	I0919 22:39:58.578679  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:58.578811  254979 start.go:364] duration metric: took 67.723µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:39:58.578838  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:58.578849  254979 fix.go:54] fixHost starting: m04
	I0919 22:39:58.579176  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.599096  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:39:58.599126  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:58.600560  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:39:58.600634  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:39:58.859923  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.879236  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:39:58.879668  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:39:58.897236  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:39:58.897463  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:58.897552  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:39:58.918053  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:58.918271  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:39:58.918281  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:58.918874  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38044->127.0.0.1:32833: read: connection reset by peer
	I0919 22:40:01.920959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:04.921476  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:07.922288  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:10.923340  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:13.923844  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:16.925745  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:19.926668  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:22.928799  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:25.930210  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:28.930708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:31.933147  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:34.934423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:37.934726  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:40.935749  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:43.937730  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:46.940224  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:49.940869  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:52.941959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:55.943080  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:58.944241  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:01.945832  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:04.946150  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:07.947240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:10.947732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:13.949692  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:16.951725  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:19.952381  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:22.953741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:25.954706  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:28.955793  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:31.957862  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:34.959138  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:37.960247  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:40.961431  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:43.962702  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:46.964762  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:49.965365  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:52.966748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:55.968435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:58.968992  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:01.970768  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:04.971818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:07.972196  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:10.973355  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:13.974698  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:16.976791  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:19.977362  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:22.979658  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:25.981435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.981739  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.983953  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.984393  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.984732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.985736  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.987769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.989756  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.990750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.991490  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.991855  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.992596  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:58.992632  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:42:58.992719  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.013746  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.013831  254979 machine.go:96] duration metric: took 3m0.116353121s to provisionDockerMachine
	I0919 22:42:59.013918  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:59.013953  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.033883  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.033989  254979 retry.go:31] will retry after 316.823283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.351622  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.370204  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.370320  254979 retry.go:31] will retry after 311.292492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.682751  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.702069  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.702202  254979 retry.go:31] will retry after 591.889704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.294731  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.313949  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:00.314105  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:00.314125  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.314184  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:00.314230  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.331741  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.331862  254979 retry.go:31] will retry after 207.410605ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.540373  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.558832  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.558943  254979 retry.go:31] will retry after 400.484554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.960435  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.980834  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.980981  254979 retry.go:31] will retry after 805.175329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.786666  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:01.804452  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:01.804589  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.804609  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.804626  254979 fix.go:56] duration metric: took 3m3.225778678s for fixHost
	I0919 22:43:01.804633  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.225810313s
	W0919 22:43:01.804739  254979 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.806803  254979 out.go:203] 
	W0919 22:43:01.808013  254979 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.808027  254979 out.go:285] * 
	W0919 22:43:01.810171  254979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:43:01.811468  254979 out.go:203] 
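Root cause visible in the trace above: after each `docker start ha-434755-m04` the container is briefly reported as running, but by the time provisioning asks for the host port mapped to 22/tcp the container is no longer running, so every `docker container inspect -f ... HostPort ...` call exits with code 1 and the start fails with GUEST_START ("unable to inspect a not running container to get SSH port"). Below is a small sketch of the same two lookups using the Docker Go SDK; the container name is taken from the log, but the SDK-based approach is an illustrative assumption for debugging, since minikube itself shells out to the docker CLI as shown above.

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	ctx := context.Background()
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of: docker container inspect ha-434755-m04 --format={{.State.Status}}
    	info, err := cli.ContainerInspect(ctx, "ha-434755-m04")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("state:", info.State.Status)

    	// Equivalent of the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
    	// The binding only exists while the container is running, which is why the
    	// lookup keeps returning exit code 1 throughout the log above.
    	bindings := info.NetworkSettings.Ports["22/tcp"]
    	if len(bindings) == 0 {
    		fmt.Println("no 22/tcp host binding: container is not running")
    		return
    	}
    	fmt.Println("ssh host port:", bindings[0].HostPort)
    }

When the 22/tcp binding is missing, the only useful signal left is the container state and its logs; the recovery suggested in the trace itself is running "minikube delete -p ha-434755" and starting over.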
	
	
	==> Docker <==
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Setting cgroupDriver systemd"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 19 22:34:36 ha-434755 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-v7khr_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3041d5d93037c86c3cfadae837272511c922a063939621dadb3263b72427c10/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a6b58aa00fb3ed47c31437427373513e3cf158ba0f49315f653ed171815d1ae/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee54e9ddf31eb43f3d1b92eb3fba3f59792644b4cca713389d08f8df0ca678ef/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d21bfdf988a075c914dace11f808a9b5349ae9667593ff7a4af4b2c491050a8/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd64b2298ea2e14f8a79f2ef7cbc281f0a4cc54d3c5b88870d2317cf4e796496/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/474504d27788a62fc731085b07e40bfd02db95b0dee6eb9f01e76872ac1b4613/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0571a9b22aa8dba90ce65f75de015c275de4f02c9b11d07445117722c8bd5410/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/16320e14d7e184563d15b2804dbf3e9612c480a8dcb1c6db031a96760d11777b/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bcc3d90f1ae423c076bac3bff5068dc970a3e0231e8ff9693d1501842df84ab/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d662a6a0cce0d2a16826cebfb1f342627aa7c367df671adf5932fdf952bcb33/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11b728526ee593e5f0a5d07ce40d5d8d85f6444e5024cf0803eda48dfdeacbbd/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:35:17 ha-434755 dockerd[809]: time="2025-09-19T22:35:17.095642158Z" level=info msg="ignoring event" container=9f3583c0285479d52f54ce342fa39a2bf968d32dd01c6fa37ed4e82770c0069a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:35:27 ha-434755 dockerd[809]: time="2025-09-19T22:35:27.740317296Z" level=info msg="ignoring event" container=e18b45e159c1182e66b623c3d7b119a97e0abd68eb463ffb6cf7841ae7b09580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd2048728598c       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       3                   5bcc3d90f1ae4       storage-provisioner
	f2e4587626b5c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   3d21bfdf988a0       kube-vip-ha-434755
	e18b45e159c11       6e38f40d628db                                                                                         8 minutes ago       Exited              storage-provisioner       2                   5bcc3d90f1ae4       storage-provisioner
	c9a94a8bca16c       409467f978b4a                                                                                         8 minutes ago       Running             kindnet-cni               1                   11b728526ee59       kindnet-djvx4
	9a99065ed6ffc       8c811b4aec35f                                                                                         8 minutes ago       Running             busybox                   1                   8d662a6a0cce0       busybox-7b57f96db7-v7khr
	d61ae6148e697       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   16320e14d7e18       coredns-66bc5c9577-w8trg
	54785bb274bdd       df0860106674d                                                                                         8 minutes ago       Running             kube-proxy                1                   474504d27788a       kube-proxy-gzpg8
	ad8e40cf82bf1       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   0571a9b22aa8d       coredns-66bc5c9577-4lmln
	af499a9e8d13a       5f1f5298c888d                                                                                         8 minutes ago       Running             etcd                      1                   e3041d5d93037       etcd-ha-434755
	9f3583c028547       765655ea60781                                                                                         8 minutes ago       Exited              kube-vip                  0                   3d21bfdf988a0       kube-vip-ha-434755
	53ac6087206b0       46169d968e920                                                                                         8 minutes ago       Running             kube-scheduler            1                   bd64b2298ea2e       kube-scheduler-ha-434755
	379f8eb19bc07       a0af72f2ec6d6                                                                                         8 minutes ago       Running             kube-controller-manager   1                   ee54e9ddf31eb       kube-controller-manager-ha-434755
	deaf26f878611       90550c43ad2bc                                                                                         8 minutes ago       Running             kube-apiserver            1                   0a6b58aa00fb3       kube-apiserver-ha-434755
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Exited              busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	276fb29221693       52546a367cc9e                                                                                         17 minutes ago      Exited              coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         17 minutes ago      Exited              coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              18 minutes ago      Exited              kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         18 minutes ago      Exited              kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	baeef3d333816       90550c43ad2bc                                                                                         18 minutes ago      Exited              kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         18 minutes ago      Exited              etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         18 minutes ago      Exited              kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         18 minutes ago      Exited              kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad8e40cf82bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54656 - 31900 "HINFO IN 352629652807927435.4937880101774792236. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027954607s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [d61ae6148e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 30613 "HINFO IN 7566855018603772192.7692448748435092535. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034224338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:42:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 77a4720958d84b7eaaec886ee550a10f
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m1s                   kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    18m                    kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           18m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           9m27s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x7 over 8m26s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m3s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           7m                     node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 547644a749674c618fb4cf640be170c7
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m38s                  kube-proxy       
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m27s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s (x7 over 8m25s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m3s                   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeNotReady             7m13s                  node-controller  Node ha-434755-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           7m                     node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           6m25s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	Name:               ha-434755-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:39:23 +0000   Fri, 19 Sep 2025 22:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:39:23 +0000   Fri, 19 Sep 2025 22:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:39:23 +0000   Fri, 19 Sep 2025 22:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:39:23 +0000   Fri, 19 Sep 2025 22:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-434755-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 f71249e482914776bad70f7492069d0d
	  System UUID:                d750116b-8986-4d1b-a4c8-19720c8ed559
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c67nh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-434755-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-jrkrv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-434755-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-434755-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-dzrbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-434755-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-434755-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  RegisteredNode           17m                  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           17m                  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           17m                  node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           9m27s                node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           8m3s                 node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  NodeNotReady             7m13s                node-controller  Node ha-434755-m03 status is now: NodeNotReady
	  Normal  Starting                 7m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m6s (x8 over 7m6s)  kubelet          Node ha-434755-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m6s (x8 over 7m6s)  kubelet          Node ha-434755-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m6s (x7 over 7m6s)  kubelet          Node ha-434755-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m                   node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           6m25s                node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	  Normal  RegisteredNode           5m54s                node-controller  Node ha-434755-m03 event: Registered Node ha-434755-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [af499a9e8d13] <==
	{"level":"info","ts":"2025-09-19T22:35:58.326015Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:35:58.326081Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:35:58.326117Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:35:58.328349Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:35:58.328390Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:35:58.342694Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:35:58.342804Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:36:30.068450Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:36:30.068527Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:36:30.072057Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6088e2429f689fd8","error":"failed to dial 6088e2429f689fd8 on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-09-19T22:36:30.242921Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:36:32.449550Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:32.449611Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:34.596997Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:36:36.451120Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:36.451176Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:40.452696Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:40.452756Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"6088e2429f689fd8","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-09-19T22:36:41.206018Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:36:41.206071Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.206107Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.206610Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:36:41.206642Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217881Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217883Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:34:25.770918Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770951Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770958Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:34:25.770961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770964Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"error","ts":"2025-09-19T22:34:25.770967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770983Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771005Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771078Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771112Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771119Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771126Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771158Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771178Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771533Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771565Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771593Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771605Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.773232Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:34:25.773292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.773326Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:34:25.773340Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-434755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:43:03 up  1:25,  0 users,  load average: 1.49, 1.55, 12.58
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:33:33.792856       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:43.793581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:43.793641       1 main.go:301] handling current node
	I0919 22:33:43.793662       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:43.793669       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:43.793876       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:43.793892       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:53.797667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:53.797706       1 main.go:301] handling current node
	I0919 22:33:53.797728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:53.797735       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:53.797927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:53.797943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:03.791573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:03.791611       1 main.go:301] handling current node
	I0919 22:34:03.791641       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:03.791648       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:03.791853       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:03.791867       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:13.793236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:13.793265       1 main.go:301] handling current node
	I0919 22:34:13.793295       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:13.793300       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:13.793467       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:13.793476       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c9a94a8bca16] <==
	I0919 22:42:18.399148       1 main.go:301] handling current node
	I0919 22:42:28.398186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:28.398218       1 main.go:301] handling current node
	I0919 22:42:28.398234       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:28.398238       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:28.398404       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:28.398413       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.398823       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:38.398878       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:38.399120       1 main.go:301] handling current node
	I0919 22:42:38.399136       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:38.399142       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398754       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:48.398788       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398985       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:48.399000       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:48.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:48.399114       1 main.go:301] handling current node
	I0919 22:42:58.397777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:58.397818       1 main.go:301] handling current node
	I0919 22:42:58.397838       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:58.397844       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:58.398040       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:58.398053       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [baeef3d33381] <==
	W0919 22:34:28.088519       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.091813       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.098214       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.136852       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.144149       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.260258       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.261581       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.262865       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.267338       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.271648       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.310107       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.353280       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0919 22:34:28.398855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:34:28.418582       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.455050       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.495310       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.523204       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.552947       1 logging.go:55] [core] [Channel #11 SubChannel #13]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.598893       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.615348       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.668129       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.682280       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.690932       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.713514       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.755606       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [deaf26f87861] <==
	W0919 22:35:45.088376       1 cacher.go:182] Terminating all watchers from cacher clusterroles.rbac.authorization.k8s.io
	W0919 22:35:45.088419       1 cacher.go:182] Terminating all watchers from cacher leases.coordination.k8s.io
	W0919 22:35:45.088450       1 cacher.go:182] Terminating all watchers from cacher limitranges
	W0919 22:35:45.088575       1 cacher.go:182] Terminating all watchers from cacher namespaces
	W0919 22:35:45.088601       1 cacher.go:182] Terminating all watchers from cacher poddisruptionbudgets.policy
	W0919 22:35:45.088638       1 cacher.go:182] Terminating all watchers from cacher customresourcedefinitions.apiextensions.k8s.io
	W0919 22:35:45.087060       1 cacher.go:182] Terminating all watchers from cacher podtemplates
	W0919 22:35:45.087171       1 cacher.go:182] Terminating all watchers from cacher validatingwebhookconfigurations.admissionregistration.k8s.io
	W0919 22:35:45.088937       1 cacher.go:182] Terminating all watchers from cacher horizontalpodautoscalers.autoscaling
	W0919 22:35:45.088939       1 cacher.go:182] Terminating all watchers from cacher controllerrevisions.apps
	I0919 22:35:45.947836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:35:50.477780       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:35:57.503906       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:13.278842       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:00.219288       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:30.363569       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:02.702552       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:45.378514       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:20.466062       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:54.026196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:40:30.227200       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:01.678001       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:35.549736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:22.195913       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:01.261730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [379f8eb19bc0] <==
	I0919 22:35:00.446686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:35:00.448277       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:35:00.468548       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:35:00.470805       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:35:00.473226       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:35:00.473248       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:35:00.473274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:35:00.473273       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:35:00.473294       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:35:00.473349       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:35:00.473933       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:35:00.473968       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:35:00.477672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 22:35:00.477725       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 22:35:00.477771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 22:35:00.477781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 22:35:00.477781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:35:00.477788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 22:35:00.486920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:35:00.489123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:35:00.491334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:35:00.493617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:35:00.495803       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:35:00.498093       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:35:00.499331       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [54785bb274bd] <==
	I0919 22:34:57.761058       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:34:57.833193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:35:00.913912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-434755&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0919 22:35:01.834138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:35:01.834169       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:35:01.834256       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:35:01.855270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:35:01.855328       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:35:01.860764       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:35:01.861199       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:35:01.861231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:35:01.862567       1 config.go:200] "Starting service config controller"
	I0919 22:35:01.862599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:35:01.862627       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:35:01.862658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:35:01.862680       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:35:01.862685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:35:01.862736       1 config.go:309] "Starting node config controller"
	I0919 22:35:01.863095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:35:01.863114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:35:01.963632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:35:01.963649       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:35:01.963870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	I0919 22:34:18.774597       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:34:18.774662       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:34:18.774692       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:34:18.774767       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:18.774826       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:34:18.774850       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [53ac6087206b] <==
	I0919 22:34:38.691784       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:34:49.254859       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0919 22:34:49.254890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:34:49.254896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:34:56.962003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:34:56.962030       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:34:56.963821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.963864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.964116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:34:56.964511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:34:57.064621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:40:57 ha-434755 kubelet[1340]: E0919 22:40:57.274738    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757406 maxSize=10485760
	Sep 19 22:41:07 ha-434755 kubelet[1340]: E0919 22:41:07.280838    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:07 ha-434755 kubelet[1340]: E0919 22:41:07.280938    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285674    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285783    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289035    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289121    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296179    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296280    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299156    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299257    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303655    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303736    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.307724    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.308098    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317113    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317223    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320642    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320728    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327066    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327175    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333029    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333130    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335444    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335565    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (547.00s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 node delete m03 --alsologtostderr -v 5: (8.467125436s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (500.067049ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:43:12.576043  303413 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:43:12.576328  303413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:12.576340  303413 out.go:374] Setting ErrFile to fd 2...
	I0919 22:43:12.576346  303413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:12.576549  303413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:43:12.576723  303413 out.go:368] Setting JSON to false
	I0919 22:43:12.576747  303413 mustload.go:65] Loading cluster: ha-434755
	I0919 22:43:12.576871  303413 notify.go:220] Checking for updates...
	I0919 22:43:12.577171  303413 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:12.577197  303413 status.go:174] checking status of ha-434755 ...
	I0919 22:43:12.577677  303413 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:12.595944  303413 status.go:371] ha-434755 host status = "Running" (err=<nil>)
	I0919 22:43:12.595982  303413 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:43:12.596258  303413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:12.613169  303413 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:43:12.613395  303413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:12.613433  303413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:12.630269  303413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:12.722255  303413 ssh_runner.go:195] Run: systemctl --version
	I0919 22:43:12.726804  303413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:43:12.737997  303413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:43:12.794276  303413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:43:12.783608647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:43:12.794907  303413 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:43:12.794941  303413 api_server.go:166] Checking apiserver status ...
	I0919 22:43:12.794975  303413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:43:12.807351  303413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1917/cgroup
	W0919 22:43:12.816989  303413 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1917/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:12.817032  303413 ssh_runner.go:195] Run: ls
	I0919 22:43:12.820825  303413 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:43:12.825225  303413 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:43:12.825243  303413 status.go:463] ha-434755 apiserver status = Running (err=<nil>)
	I0919 22:43:12.825254  303413 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:43:12.825276  303413 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:43:12.825586  303413 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:12.843184  303413 status.go:371] ha-434755-m02 host status = "Running" (err=<nil>)
	I0919 22:43:12.843203  303413 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:43:12.843439  303413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:12.860071  303413 host.go:66] Checking if "ha-434755-m02" exists ...
	I0919 22:43:12.860322  303413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:12.860357  303413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:12.879355  303413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:12.971355  303413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:43:12.983020  303413 kubeconfig.go:125] found "ha-434755" server: "https://192.168.49.254:8443"
	I0919 22:43:12.983045  303413 api_server.go:166] Checking apiserver status ...
	I0919 22:43:12.983075  303413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:43:12.993952  303413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4280/cgroup
	W0919 22:43:13.003185  303413 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4280/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:13.003226  303413 ssh_runner.go:195] Run: ls
	I0919 22:43:13.006975  303413 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:43:13.011224  303413 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:43:13.011250  303413 status.go:463] ha-434755-m02 apiserver status = Running (err=<nil>)
	I0919 22:43:13.011261  303413 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:43:13.011281  303413 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:43:13.011571  303413 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:43:13.028283  303413 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:43:13.028300  303413 status.go:384] host is not running, skipping remaining checks
	I0919 22:43:13.028306  303413 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:34:29.615072967Z",
	            "FinishedAt": "2025-09-19T22:34:29.008814579Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74329b990d9dce1255e17e62df25a8a9f852fdd2c0a3169e4fe5efa476dd74f4",
	            "SandboxKey": "/var/run/docker/netns/74329b990d9d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:d1:ee:b6:45:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "d75b4c607beec906838273796c0d4d2073838732be19fc5120b629f9aef39297",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.089320666s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ stop    │ ha-434755 stop --alsologtostderr -v 5                                                                                               │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:34 UTC │
	│ start   │ ha-434755 start --wait true --alsologtostderr -v 5                                                                                  │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:34 UTC │                     │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │                     │
	│ node    │ ha-434755 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │ 19 Sep 25 22:43 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:34:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:34:29.392603  254979 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:34:29.392715  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392724  254979 out.go:374] Setting ErrFile to fd 2...
	I0919 22:34:29.392729  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392941  254979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:34:29.393348  254979 out.go:368] Setting JSON to false
	I0919 22:34:29.394260  254979 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4605,"bootTime":1758316664,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:34:29.394355  254979 start.go:140] virtualization: kvm guest
	I0919 22:34:29.396091  254979 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:34:29.397369  254979 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:34:29.397371  254979 notify.go:220] Checking for updates...
	I0919 22:34:29.399394  254979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:34:29.400491  254979 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:29.401460  254979 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:34:29.402392  254979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:34:29.403394  254979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:34:29.404817  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:29.404928  254979 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:34:29.428811  254979 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:34:29.428942  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.487899  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.477486939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.488017  254979 docker.go:318] overlay module found
	I0919 22:34:29.489668  254979 out.go:179] * Using the docker driver based on existing profile
	I0919 22:34:29.490789  254979 start.go:304] selected driver: docker
	I0919 22:34:29.490803  254979 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.490958  254979 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:34:29.491069  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.548618  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.539006546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.549315  254979 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:29.549349  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:29.549417  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:29.549484  254979 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.551223  254979 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:34:29.552360  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:29.553540  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:29.554463  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:29.554533  254979 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:34:29.554548  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:29.554553  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:29.554642  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:29.554659  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:29.554803  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.573612  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:29.573628  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:29.573642  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:29.573663  254979 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:29.573715  254979 start.go:364] duration metric: took 34.414µs to acquireMachinesLock for "ha-434755"
	I0919 22:34:29.573732  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:29.573739  254979 fix.go:54] fixHost starting: 
	I0919 22:34:29.573944  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.590456  254979 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:34:29.590478  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:29.592146  254979 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:34:29.592198  254979 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:34:29.805688  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.822967  254979 kic.go:430] container "ha-434755" state is running.
	I0919 22:34:29.823300  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:29.840845  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.841033  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:29.841096  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:29.858584  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:29.858850  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:29.858861  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:29.859537  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44758->127.0.0.1:32813: read: connection reset by peer
	I0919 22:34:32.994537  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:32.994564  254979 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:34:32.994618  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.011712  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.011959  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.011976  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:34:33.156752  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:33.156836  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.173652  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.173873  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.173889  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:33.306488  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:33.306532  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:33.306552  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:33.306560  254979 provision.go:84] configureAuth start
	I0919 22:34:33.306606  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:33.323565  254979 provision.go:143] copyHostCerts
	I0919 22:34:33.323598  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323624  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:33.323639  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323706  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:33.323780  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323798  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:33.323804  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323829  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:33.323869  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323886  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:33.323892  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323914  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:33.323960  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:34:33.559679  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:33.559738  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:33.559789  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.577865  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:33.672478  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:33.672568  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:34:33.696200  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:33.696267  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:33.719990  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:33.720060  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:33.743555  254979 provision.go:87] duration metric: took 436.981146ms to configureAuth
	I0919 22:34:33.743634  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:33.743848  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:33.743893  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.760563  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.760782  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.760794  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:33.894134  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:33.894169  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:33.894578  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:33.894689  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.912104  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.912369  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.912478  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:34.059005  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:34.059094  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.075824  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.076036  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:34.076054  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:34.214294  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.214323  254979 machine.go:96] duration metric: took 4.373275133s to provisionDockerMachine
	I0919 22:34:34.214337  254979 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:34:34.214348  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:34.214400  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:34.214446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.231190  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.326475  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:34.329765  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:34.329812  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:34.329828  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:34.329839  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:34.329853  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:34.329911  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:34.330025  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:34.330042  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:34.330156  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:34.338505  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:34.361549  254979 start.go:296] duration metric: took 147.197651ms for postStartSetup
	I0919 22:34:34.361611  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:34.361647  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.378413  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.469191  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:34.473539  254979 fix.go:56] duration metric: took 4.899792233s for fixHost
	I0919 22:34:34.473566  254979 start.go:83] releasing machines lock for "ha-434755", held for 4.899839715s
	I0919 22:34:34.473629  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:34.489927  254979 ssh_runner.go:195] Run: cat /version.json
	I0919 22:34:34.489970  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.490024  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:34.490090  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.506577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.507908  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.666358  254979 ssh_runner.go:195] Run: systemctl --version
	I0919 22:34:34.670859  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:34.675244  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:34.693880  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:34.693949  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:34.702353  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:34.702375  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.702401  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.702523  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:34.718289  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:34.727659  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:34.736865  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:34.736911  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:34.745995  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.755127  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:34.764124  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.773283  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:34.782430  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:34.791523  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:34.800544  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:34.809524  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:34.817361  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:34.825188  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:34.890049  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:34:34.960529  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.960584  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.960629  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:34.973026  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:34.983825  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:35.002291  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:35.012972  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:35.023687  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:35.039432  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:35.042752  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:35.050998  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:35.067853  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:35.132842  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:35.196827  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:35.196991  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:35.215146  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:35.225890  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:35.291005  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:34:36.100785  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.112048  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:34:36.122871  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:34:36.134226  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.144968  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:34:36.215570  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:34:36.283944  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.348465  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:34:36.370429  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:34:36.381048  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.447404  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:34:36.520573  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.532578  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:34:36.532632  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:34:36.536280  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.536339  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.539490  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.573579  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:34:36.573643  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.597609  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.624028  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:34:36.624105  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:36.640631  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:36.644560  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.656165  254979 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:34:36.656309  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:36.656354  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.677616  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.677637  254979 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:34:36.677692  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.698524  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.698549  254979 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:34:36.698563  254979 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:34:36.698688  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:34:36.698756  254979 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:34:36.750118  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:36.750142  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:36.750153  254979 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:34:36.750179  254979 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:34:36.750289  254979 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
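	(Annotation: the kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration rendered above is copied to the node a few lines further down as /var/tmp/minikube/kubeadm.yaml.new. A minimal, hypothetical spot-check, not part of this test run and assuming the ha-434755 profile is still up:

	    # Inspect the kubeadm config minikube rendered onto the primary node.
	    minikube ssh -p ha-434755 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    # The controlPlaneEndpoint host is pinned to the kube-vip VIP via /etc/hosts.
	    minikube ssh -p ha-434755 -- getent hosts control-plane.minikube.internal
	)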
	
	I0919 22:34:36.750306  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:36.750341  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:36.762623  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:36.762741  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
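	(Annotation: this kube-vip static-pod manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, in the 1358-byte scp. A hypothetical way to confirm the pod is running and that the leader holds the 192.168.49.254 VIP on eth0, the vip_interface named above:

	    # Assumes kubectl is pointed at the ha-434755 cluster.
	    kubectl -n kube-system get pods -o wide | grep kube-vip
	    # On the leader control-plane node, the VIP should show as a secondary address on eth0.
	    minikube ssh -p ha-434755 -- ip addr show eth0
	)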
	I0919 22:34:36.762799  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:36.771904  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:36.771964  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:34:36.780568  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:34:36.798205  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:36.815070  254979 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:34:36.831719  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:36.848409  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:36.851767  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.862730  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.930528  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:36.955755  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:34:36.955780  254979 certs.go:194] generating shared ca certs ...
	I0919 22:34:36.955801  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:36.955964  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:34:36.956015  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:34:36.956028  254979 certs.go:256] generating profile certs ...
	I0919 22:34:36.956149  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:34:36.956184  254979 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837
	I0919 22:34:36.956203  254979 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.093694  254979 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 ...
	I0919 22:34:37.093723  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837: {Name:mkb7dc47ca29d762ecbca001badafbd7a0f63f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093875  254979 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 ...
	I0919 22:34:37.093889  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837: {Name:mkfe1145f49b260387004be5cad78abcf22bf7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093983  254979 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:34:37.094141  254979 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:34:37.094347  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:34:37.094373  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.094399  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.094419  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.094430  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.094444  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.094453  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.094465  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.094477  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.094562  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:34:37.094597  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.094607  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.094630  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.094660  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.094692  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.094749  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:37.094791  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.094813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.094829  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.095515  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.127336  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:34:37.150544  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.175327  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:34:37.201819  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:34:37.225372  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.248103  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.271531  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:34:37.294329  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:34:37.316902  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.340094  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:34:37.363279  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:34:37.380576  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.385767  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:34:37.394806  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398055  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398106  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.404576  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.412913  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:34:37.421966  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425379  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425442  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.432256  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.440776  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.449890  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453164  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453215  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.459800  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
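	(Annotation: the 51391683.0, 3ec20f2e.0 and b5213941.0 symlink names above are OpenSSL subject hashes, which is how the system trust directory addresses CA certificates. An illustrative check, assuming a shell on the node:

	    # The hash printed is the basename of the corresponding /etc/ssl/certs/<hash>.0 link.
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	    ls -l /etc/ssl/certs/b5213941.0
	)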
	I0919 22:34:37.468138  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.471431  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:34:37.477659  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:34:37.484148  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:34:37.491177  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:34:37.499070  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:34:37.506362  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
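	(Annotation: each openssl invocation above exits non-zero if the certificate expires within 86400 seconds, i.e. 24 hours, which is what would trigger regeneration on restart. A standalone equivalent, with the same path used purely as an example:

	    # Exit status 0 means the cert remains valid for at least the next 24 hours.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "apiserver.crt valid for >= 24h"
	    else
	        echo "apiserver.crt expires within 24h"
	    fi
	)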
	I0919 22:34:37.513842  254979 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:37.513988  254979 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:34:37.537542  254979 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:34:37.549913  254979 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:34:37.549939  254979 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:34:37.550009  254979 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:34:37.564566  254979 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.565106  254979 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.565386  254979 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:34:37.565797  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.566562  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:34:37.567054  254979 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:34:37.567076  254979 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:34:37.567082  254979 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:34:37.567086  254979 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:34:37.567090  254979 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:34:37.567448  254979 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:34:37.567566  254979 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:34:37.580682  254979 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:34:37.580712  254979 kubeadm.go:593] duration metric: took 30.755549ms to restartPrimaryControlPlane
	I0919 22:34:37.580721  254979 kubeadm.go:394] duration metric: took 66.889653ms to StartCluster
	I0919 22:34:37.580737  254979 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.580803  254979 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.581391  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.581643  254979 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:34:37.581673  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:34:37.581681  254979 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:34:37.582003  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.584304  254979 out.go:179] * Enabled addons: 
	I0919 22:34:37.585620  254979 addons.go:514] duration metric: took 3.930682ms for enable addons: enabled=[]
	I0919 22:34:37.585668  254979 start.go:246] waiting for cluster config update ...
	I0919 22:34:37.585686  254979 start.go:255] writing updated cluster config ...
	I0919 22:34:37.587067  254979 out.go:203] 
	I0919 22:34:37.588682  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.588844  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.590451  254979 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:34:37.591363  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:37.592305  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:37.593270  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:37.593292  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:37.593367  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:37.593388  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:37.593398  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:37.593538  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.620137  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:37.620160  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:37.620173  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:37.620210  254979 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:37.620263  254979 start.go:364] duration metric: took 34.403µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:34:37.620280  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:37.620286  254979 fix.go:54] fixHost starting: m02
	I0919 22:34:37.620582  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.644601  254979 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:34:37.644633  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:37.645946  254979 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:34:37.646038  254979 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:34:37.949352  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.973649  254979 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:34:37.974176  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:37.994068  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.994337  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:37.994397  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:38.015752  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:38.016073  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:38.016093  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:38.016827  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42006->127.0.0.1:32818: read: connection reset by peer
	I0919 22:34:41.154622  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.154651  254979 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:34:41.154707  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.173029  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.173245  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.173258  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:34:41.323523  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.323600  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.341537  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.341755  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.341772  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:41.477673  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:41.477715  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:41.477735  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:41.477745  254979 provision.go:84] configureAuth start
	I0919 22:34:41.477795  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:41.495782  254979 provision.go:143] copyHostCerts
	I0919 22:34:41.495828  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495863  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:41.495875  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495952  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:41.496051  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496089  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:41.496098  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496141  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:41.496218  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496251  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:41.496261  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496301  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:41.496386  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:34:41.732873  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:41.732963  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:41.733012  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.750783  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:41.848595  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:41.848667  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:41.873665  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:41.873730  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:41.897993  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:41.898059  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:41.922977  254979 provision.go:87] duration metric: took 445.218722ms to configureAuth
	I0919 22:34:41.923009  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:41.923260  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:41.923309  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.942404  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.942657  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.942672  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:42.078612  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:42.078647  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:42.078854  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:42.078927  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.096405  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.096645  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.096717  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:42.245231  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:42.245405  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.264515  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.264739  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.264757  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:53.646301  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-19 22:32:30.139641518 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:34:42.242101116 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
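	(Annotation: the diff shows the only change on m02 is the injected Environment=NO_PROXY=192.168.49.2 line, so dockerd on the secondary node bypasses any proxy for the primary's IP. A hypothetical post-restart check, where -n selects the m02 machine:

	    # Confirm the rewritten unit and the environment the running daemon was started with.
	    minikube ssh -p ha-434755 -n m02 -- systemctl cat docker.service | grep NO_PROXY
	    minikube ssh -p ha-434755 -n m02 -- sudo systemctl show docker --property=Environment
	)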
	
	I0919 22:34:53.646338  254979 machine.go:96] duration metric: took 15.651988955s to provisionDockerMachine
	I0919 22:34:53.646360  254979 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:34:53.646376  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:53.646456  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:53.646544  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.668809  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.779279  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:53.785219  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:53.785262  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:53.785275  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:53.785285  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:53.785298  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:53.785375  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:53.785594  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:53.785613  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:53.785773  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:53.798199  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:53.832463  254979 start.go:296] duration metric: took 186.083271ms for postStartSetup
	I0919 22:34:53.832621  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:53.832679  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.858619  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.960212  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:53.966312  254979 fix.go:56] duration metric: took 16.34601659s for fixHost
	I0919 22:34:53.966340  254979 start.go:83] releasing machines lock for "ha-434755-m02", held for 16.346069332s
	I0919 22:34:53.966412  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:53.990694  254979 out.go:179] * Found network options:
	I0919 22:34:53.992467  254979 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:34:53.994237  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:53.994289  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:53.994386  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:53.994425  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:53.994439  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.994522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:54.015258  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.015577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.109387  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:54.187526  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:54.187642  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:54.196971  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:54.196996  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.197029  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.197147  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.213126  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:54.222913  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:54.232770  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:54.232827  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:54.242273  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.252123  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:54.261682  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.271056  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:54.279900  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:54.289084  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:54.298339  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:54.307617  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:54.315730  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:54.323734  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:54.421356  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
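	(Annotation: the sed edits above switch containerd's runc runtime to the systemd cgroup driver and normalize its CNI settings before the restart. A hypothetical follow-up check on m02:

	    # SystemdCgroup = true is what the sed on /etc/containerd/config.toml enforces.
	    minikube ssh -p ha-434755 -n m02 -- grep -n SystemdCgroup /etc/containerd/config.toml
	)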
	I0919 22:34:54.553517  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.553570  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.553663  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:54.567589  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.578657  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:54.598306  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.610176  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:54.621475  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.637463  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:54.640827  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:54.649159  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:54.666320  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:54.793386  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:54.888125  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:54.888175  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
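The 129-byte /etc/docker/daemon.json pushed here switches dockerd to the systemd cgroup driver so it matches the kubelet. Its actual contents are not in this log; a plausible equivalent would be:

  # assumed contents, not the actual file minikube writes
  cat <<'EOF' | sudo tee /etc/docker/daemon.json >/dev/null
  {
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "storage-driver": "overlay2"
  }
  EOF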
	I0919 22:34:54.907425  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:54.918281  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:55.016695  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:35:12.030390  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.013654873s)
	I0919 22:35:12.030485  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:35:12.046005  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:35:12.062445  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:35:12.090262  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.103570  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:35:12.186633  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:35:12.276082  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.351919  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:35:12.379448  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:35:12.392643  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.476410  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:35:12.559621  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.572526  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:35:12.572588  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:35:12.576491  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:35:12.576564  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:35:12.579932  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:35:12.614468  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:35:12.614551  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.641603  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.668151  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:35:12.669148  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:35:12.670150  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:12.686876  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:35:12.690808  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:12.702422  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:35:12.702695  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:12.702948  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:35:12.719929  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:35:12.720184  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:35:12.720198  254979 certs.go:194] generating shared ca certs ...
	I0919 22:35:12.720233  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:35:12.720391  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:35:12.720481  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:35:12.720510  254979 certs.go:256] generating profile certs ...
	I0919 22:35:12.720610  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:35:12.720697  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.90db4c9c
	I0919 22:35:12.720757  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:35:12.720773  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:35:12.720795  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:35:12.720813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:35:12.720830  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:35:12.720847  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:35:12.720866  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:35:12.720884  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:35:12.720902  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:35:12.720966  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:35:12.721023  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:35:12.721036  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:35:12.721076  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:35:12.721111  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:35:12.721146  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:35:12.721242  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:35:12.721296  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:12.721327  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:35:12.721346  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:35:12.721427  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:35:12.738056  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:35:12.825819  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:35:12.830244  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:35:12.843478  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:35:12.847190  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:35:12.859905  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:35:12.863484  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:35:12.875902  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:35:12.879295  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:35:12.891480  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:35:12.894661  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:35:12.906895  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:35:12.910234  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:35:12.922725  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:35:12.947840  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:35:12.972792  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:35:12.997517  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:35:13.022085  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:35:13.047365  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:35:13.072377  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:35:13.099533  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:35:13.134971  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:35:13.167709  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:35:13.206266  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:35:13.239665  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:35:13.266921  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:35:13.294118  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:35:13.321828  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:35:13.343786  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:35:13.366845  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:35:13.389708  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:35:13.412481  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:35:13.419706  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:35:13.431765  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436337  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436418  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.444550  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:35:13.455699  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:35:13.468242  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472223  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472279  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.480857  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:35:13.491084  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:35:13.501753  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505877  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505933  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.512774  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:35:13.522847  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:35:13.526705  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:35:13.533354  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:35:13.540112  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:35:13.546612  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:35:13.553144  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:35:13.560238  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
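The openssl runs above link each CA into /etc/ssl/certs under its subject-hash name and then verify that every serving/client cert is still valid for at least 24 hours (-checkend 86400 fails if the cert expires within that window). A manual equivalent for one cert, using paths taken from the log:

  # recompute the subject hash and (re)create the hash symlink for the minikube CA
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
  # exit status 0 means the cert is valid for at least another 24 hours
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "valid for >= 24h" || echo "expires within 24h"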
	I0919 22:35:13.568285  254979 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:35:13.568401  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:35:13.568434  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:35:13.568481  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:35:13.580554  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
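kube-vip's control-plane load balancing needs the ip_vs kernel modules, so minikube probes for them with lsmod and, as shown above, simply skips that feature when the grep finds nothing. On a host where the modules are available they could be loaded and re-checked like this (illustrative; not something this run performs):

  # load the ipvs modules and confirm they show up in lsmod
  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
  lsmod | grep ip_vs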
	I0919 22:35:13.580617  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:35:13.580665  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:35:13.589430  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:35:13.589492  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:35:13.598285  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:35:13.616427  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:35:13.634472  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:35:13.652547  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:35:13.656296  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:13.667861  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.787658  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.800614  254979 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:35:13.800904  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:13.802716  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:35:13.803906  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.907011  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.921258  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:35:13.921345  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:35:13.921671  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:35:44.196598  254979 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:35:44.196684  254979 node_ready.go:38] duration metric: took 30.274978813s for node "ha-434755-m02" to be "Ready" ...
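node_ready blocks until the new node's Ready condition turns True, which here took about 30 seconds. An equivalent check from the CLI (minikube itself does this via client-go; the kubeconfig context name is an assumption based on the profile name):

  # wait up to 6 minutes for the second control-plane node to report Ready
  kubectl --context ha-434755 wait node/ha-434755-m02 --for=condition=Ready --timeout=6m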
	I0919 22:35:44.196715  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:44.196778  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:44.696945  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.197315  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.697715  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.197708  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.697596  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.197741  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.697273  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.197137  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.696833  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.197637  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.696961  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.196947  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.697707  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.197053  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.697638  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.197170  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.697689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.197733  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.696981  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.197207  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.697745  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.197895  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.697086  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.197535  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.209362  254979 api_server.go:72] duration metric: took 42.408698512s to wait for apiserver process to appear ...
	I0919 22:35:56.209386  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:56.209404  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:56.215038  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:56.215908  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:56.215931  254979 api_server.go:131] duration metric: took 6.538723ms to wait for apiserver health ...
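The health wait probes /healthz on the first control plane directly rather than through the VIP. A manual equivalent using the profile's client cert and CA paths shown earlier in this log:

  # expect HTTP 200 with the body "ok"
  curl --cacert /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt \
       --cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt \
       --key /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key \
       https://192.168.49.2:8443/healthz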
	I0919 22:35:56.215940  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:56.222250  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:56.222279  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.222289  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.222294  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.222299  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.222306  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.222311  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.222316  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.222322  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.222328  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.222334  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.222342  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.222348  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.222353  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.222359  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.222373  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222385  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222394  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.222401  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.222409  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.222415  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.222424  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.222432  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.222444  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.222452  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.222459  254979 system_pods.go:74] duration metric: took 6.512304ms to wait for pod list to return data ...
	I0919 22:35:56.222473  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:56.224777  254979 default_sa.go:45] found service account: "default"
	I0919 22:35:56.224800  254979 default_sa.go:55] duration metric: took 2.313413ms for default service account to be created ...
	I0919 22:35:56.224809  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:56.230069  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:56.230095  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.230102  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.230139  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.230151  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.230157  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.230165  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.230173  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.230181  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.230189  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.230194  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.230202  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.230207  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.230215  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.230221  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.230234  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230245  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230256  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.230266  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.230271  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.230279  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.230288  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.230293  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.230301  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.230305  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.230316  254979 system_pods.go:126] duration metric: took 5.500729ms to wait for k8s-apps to be running ...
	I0919 22:35:56.230326  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:56.230378  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:56.242876  254979 system_svc.go:56] duration metric: took 12.542054ms WaitForService to wait for kubelet
	I0919 22:35:56.242903  254979 kubeadm.go:578] duration metric: took 42.442241309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:56.242932  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:56.245954  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.245981  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.245997  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246003  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246012  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246017  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246026  254979 node_conditions.go:105] duration metric: took 3.08778ms to run NodePressure ...
	I0919 22:35:56.246039  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:35:56.246070  254979 start.go:255] writing updated cluster config ...
	I0919 22:35:56.248251  254979 out.go:203] 
	I0919 22:35:56.249459  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:56.249573  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.250931  254979 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:35:56.252085  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:35:56.253026  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:35:56.253903  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:35:56.253926  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:35:56.253965  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:35:56.254039  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:35:56.254055  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:35:56.254179  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.276167  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:35:56.276192  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:35:56.276216  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:35:56.276247  254979 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:56.276314  254979 start.go:364] duration metric: took 46.178µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:35:56.276338  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:35:56.276347  254979 fix.go:54] fixHost starting: m03
	I0919 22:35:56.276613  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.293331  254979 fix.go:112] recreateIfNeeded on ha-434755-m03: state=Stopped err=<nil>
	W0919 22:35:56.293356  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:35:56.294620  254979 out.go:252] * Restarting existing docker container for "ha-434755-m03" ...
	I0919 22:35:56.294682  254979 cli_runner.go:164] Run: docker start ha-434755-m03
	I0919 22:35:56.544302  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.562451  254979 kic.go:430] container "ha-434755-m03" state is running.
	I0919 22:35:56.562784  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:35:56.581792  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.581992  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:35:56.582050  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:56.600026  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:56.600332  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:56.600350  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:35:56.600929  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44862->127.0.0.1:32823: read: connection reset by peer
	I0919 22:35:59.744345  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.744380  254979 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:35:59.744468  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.762953  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.763211  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.763229  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:35:59.918402  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.918522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.938390  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.938725  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.938751  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:36:00.092594  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:00.092621  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:36:00.092638  254979 ubuntu.go:190] setting up certificates
	I0919 22:36:00.092648  254979 provision.go:84] configureAuth start
	I0919 22:36:00.092699  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:00.111285  254979 provision.go:143] copyHostCerts
	I0919 22:36:00.111330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111368  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:36:00.111377  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111550  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:36:00.111664  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111692  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:36:00.111702  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111734  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:36:00.111789  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111815  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:36:00.111822  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111851  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:36:00.111906  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:36:00.494093  254979 provision.go:177] copyRemoteCerts
	I0919 22:36:00.494184  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:36:00.494248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.515583  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:00.617642  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:36:00.617700  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:36:00.643926  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:36:00.643995  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:36:00.672921  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:36:00.672984  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:36:00.696141  254979 provision.go:87] duration metric: took 603.480386ms to configureAuth
	I0919 22:36:00.696172  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:36:00.696410  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:00.696474  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.713380  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.713659  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.713680  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:36:00.854280  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:36:00.854306  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:36:00.854441  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:36:00.854527  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.877075  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.877355  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.877461  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:36:01.044491  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:36:01.044612  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.068534  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:01.068808  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:01.068828  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:36:01.223884  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:01.223911  254979 machine.go:96] duration metric: took 4.641904945s to provisionDockerMachine
	I0919 22:36:01.223926  254979 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:36:01.223940  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:36:01.224000  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:36:01.224053  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.247249  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.353476  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:36:01.356784  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:36:01.356827  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:36:01.356837  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:36:01.356847  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:36:01.356861  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:36:01.356914  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:36:01.356983  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:36:01.356995  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:36:01.357079  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:36:01.366123  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:01.390127  254979 start.go:296] duration metric: took 166.185556ms for postStartSetup
	I0919 22:36:01.390194  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:01.390248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.407444  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.500338  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:36:01.504828  254979 fix.go:56] duration metric: took 5.228477836s for fixHost
	I0919 22:36:01.504853  254979 start.go:83] releasing machines lock for "ha-434755-m03", held for 5.228525958s
	I0919 22:36:01.504916  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:01.524319  254979 out.go:179] * Found network options:
	I0919 22:36:01.525507  254979 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:36:01.526520  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526544  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526563  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526574  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:36:01.526649  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:36:01.526654  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:36:01.526686  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.526705  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.544526  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.545603  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.637520  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:36:01.728766  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:36:01.728826  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:36:01.738432  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:36:01.738466  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:01.738512  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:01.738626  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:01.755304  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:36:01.764834  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:36:01.774412  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:36:01.774471  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:36:01.783943  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.793341  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:36:01.802524  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.811594  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:36:01.821804  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:36:01.831556  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:36:01.840844  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:36:01.850193  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:36:01.858696  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:36:01.866797  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:01.986845  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:36:02.197731  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:02.197787  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:02.197844  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:36:02.210890  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.222293  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:36:02.239996  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.251285  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:36:02.262578  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:02.279146  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:36:02.282932  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:36:02.291330  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:36:02.310148  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:36:02.435893  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:36:02.556587  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:36:02.556638  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:36:02.575909  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:36:02.587513  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:02.699861  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:36:33.801843  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.101937915s)
	I0919 22:36:33.801930  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:36:33.818125  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:36:33.834866  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:36:33.856162  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:33.868263  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:36:33.959996  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:36:34.048061  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.129937  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:36:34.153114  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:36:34.164068  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.253067  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:36:34.329305  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:34.341450  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:36:34.341524  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:36:34.345717  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:36:34.345785  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:36:34.349309  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:36:34.384417  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:36:34.384478  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.410290  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.435551  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:36:34.436601  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:36:34.437771  254979 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:36:34.438757  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:36:34.455686  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:36:34.459411  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:34.471099  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:36:34.471369  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:34.471706  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:36:34.488100  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:36:34.488367  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:36:34.488381  254979 certs.go:194] generating shared ca certs ...
	I0919 22:36:34.488395  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:36:34.488553  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:36:34.488618  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:36:34.488633  254979 certs.go:256] generating profile certs ...
	I0919 22:36:34.488734  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:36:34.488804  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:36:34.488858  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:36:34.488871  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:36:34.488892  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:36:34.488912  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:36:34.488929  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:36:34.488945  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:36:34.488961  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:36:34.488983  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:36:34.489000  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:36:34.489057  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:36:34.489095  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:36:34.489107  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:36:34.489136  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:36:34.489176  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:36:34.489207  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:36:34.489261  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:34.489295  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:34.489311  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:36:34.489330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:36:34.489388  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:36:34.506474  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:36:34.592737  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:36:34.596550  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:36:34.609026  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:36:34.612572  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:36:34.624601  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:36:34.627756  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:36:34.639526  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:36:34.642628  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:36:34.654080  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:36:34.657248  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:36:34.668694  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:36:34.671921  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:36:34.683466  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:36:34.706717  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:36:34.729514  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:36:34.752135  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:36:34.775534  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:36:34.798386  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:36:34.821220  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:36:34.844089  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:36:34.869124  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:36:34.903928  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:36:34.937896  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:36:34.975415  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:36:35.003119  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:36:35.033569  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:36:35.067233  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:36:35.092336  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:36:35.121987  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:36:35.159147  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:36:35.187449  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:36:35.196710  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:36:35.210371  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215556  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215667  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.226373  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:36:35.242338  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:36:35.257634  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.262962  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.263018  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.272303  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:36:35.284458  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:36:35.297192  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.302970  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.303198  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.312827  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:36:35.325971  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:36:35.330277  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:36:35.340364  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:36:35.350648  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:36:35.360874  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:36:35.371688  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:36:35.380714  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:36:35.389839  254979 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:36:35.389978  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:36:35.390024  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:36:35.390079  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:36:35.406530  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.406626  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:36:35.406688  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:36:35.416527  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:36:35.416590  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:36:35.428557  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:36:35.448698  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:36:35.468117  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:36:35.487717  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:36:35.491337  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:35.502239  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.627390  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.641188  254979 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:36:35.641510  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:35.647624  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:36:35.648653  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.764651  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.779233  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:36:35.779307  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:36:35.779583  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782664  254979 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:36:35.782690  254979 node_ready.go:38] duration metric: took 3.089431ms for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782710  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:36:35.782756  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.283749  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.783801  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.283597  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.783305  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.283177  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.783246  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.283742  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.783802  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.283143  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.783619  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.283703  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.783799  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.283102  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.783689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.282927  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.783272  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.283621  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.783685  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.283492  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.783334  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.283701  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.783449  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.283236  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.783314  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.283694  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.783679  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.283688  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.783717  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.797519  254979 api_server.go:72] duration metric: took 14.156281107s to wait for apiserver process to appear ...
	I0919 22:36:49.797549  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:36:49.797570  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:36:49.801827  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:36:49.802688  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:36:49.802713  254979 api_server.go:131] duration metric: took 5.156138ms to wait for apiserver health ...
	I0919 22:36:49.802724  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:36:49.808731  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:36:49.808759  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.808765  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.808769  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.808774  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.808786  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.808797  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.808802  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.808807  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.808815  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.808820  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.808827  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.808832  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.808840  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.808845  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.808851  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.808857  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.808866  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.808877  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.808886  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.808890  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.808898  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.808903  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.808910  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.808914  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.808924  254979 system_pods.go:74] duration metric: took 6.193414ms to wait for pod list to return data ...
	I0919 22:36:49.808934  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:36:49.811398  254979 default_sa.go:45] found service account: "default"
	I0919 22:36:49.811416  254979 default_sa.go:55] duration metric: took 2.472816ms for default service account to be created ...
	I0919 22:36:49.811424  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:36:49.816515  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:36:49.816539  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.816545  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.816549  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.816553  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.816557  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.816560  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.816563  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.816566  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.816570  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.816573  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.816579  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.816583  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.816586  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.816590  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.816593  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.816600  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.816608  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.816614  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.816617  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.816620  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.816624  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.816627  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.816630  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.816632  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.816638  254979 system_pods.go:126] duration metric: took 5.209961ms to wait for k8s-apps to be running ...
	I0919 22:36:49.816646  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:36:49.816685  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:49.829643  254979 system_svc.go:56] duration metric: took 12.988959ms WaitForService to wait for kubelet
	I0919 22:36:49.829668  254979 kubeadm.go:578] duration metric: took 14.188435808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:36:49.829689  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:36:49.832790  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832809  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832821  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832826  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832831  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832839  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832844  254979 node_conditions.go:105] duration metric: took 3.149763ms to run NodePressure ...
	I0919 22:36:49.832857  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:36:49.832880  254979 start.go:255] writing updated cluster config ...
	I0919 22:36:49.834545  254979 out.go:203] 
	I0919 22:36:49.835774  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:49.835888  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.837288  254979 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:36:49.838260  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:36:49.839218  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:36:49.840185  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:36:49.840202  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:36:49.840217  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:36:49.840288  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:36:49.840299  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:36:49.840387  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.860086  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:36:49.860107  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:36:49.860127  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:36:49.860154  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:36:49.860216  254979 start.go:364] duration metric: took 42.254µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:36:49.860236  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:36:49.860245  254979 fix.go:54] fixHost starting: m04
	I0919 22:36:49.860537  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:49.877660  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:36:49.877688  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:36:49.879872  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:36:49.879927  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:36:50.108344  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:50.127577  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:36:50.127896  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:36:50.145596  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:50.145849  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:36:50.145921  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:36:50.163888  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:50.164152  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:36:50.164171  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:36:50.164828  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56462->127.0.0.1:32828: read: connection reset by peer
	I0919 22:36:53.166776  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:56.168046  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:59.169790  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:02.171741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:05.172828  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:08.173440  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:11.174724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:14.176746  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:17.178760  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:20.179240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:23.181529  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:26.182690  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:29.183750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:32.185732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:35.186818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:38.187492  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:41.188831  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:44.189595  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:47.191778  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:50.192786  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:53.193740  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:56.194732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:59.195773  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:02.197710  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:05.198608  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:08.199769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:11.200694  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:14.201718  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:17.203754  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:20.204819  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:23.207054  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:26.207724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:29.208708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:32.210377  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:35.211423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:38.212678  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:41.213761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:44.216005  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:47.217723  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:50.218834  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:53.220905  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:56.221494  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:59.222787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:02.224748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:05.225885  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:08.226688  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:11.228737  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:14.230719  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:17.232761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.233716  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.234909  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.236732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.237733  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.239782  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.240787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.241853  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.243182  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.245159  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.246728  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.247035  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:50.247075  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:39:50.247172  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.267390  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.267465  254979 machine.go:96] duration metric: took 3m0.121600261s to provisionDockerMachine
	I0919 22:39:50.267561  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:50.267599  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.284438  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.284611  254979 retry.go:31] will retry after 316.809243ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.601960  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.624526  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.624657  254979 retry.go:31] will retry after 330.8195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.956237  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.973928  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.974043  254979 retry.go:31] will retry after 838.035272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.812938  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.833782  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:51.833951  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:51.833974  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.834032  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:51.834079  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.854105  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:51.854225  254979 retry.go:31] will retry after 224.006538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.078741  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.096705  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.096817  254979 retry.go:31] will retry after 423.331741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.520446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.540094  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.540200  254979 retry.go:31] will retry after 355.89061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.896715  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.915594  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.915696  254979 retry.go:31] will retry after 642.935309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.559619  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:53.577650  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:53.577803  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577829  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.577840  254979 fix.go:56] duration metric: took 3m3.717595523s for fixHost
	I0919 22:39:53.577850  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.717623259s
	W0919 22:39:53.577867  254979 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577986  254979 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.578002  254979 start.go:729] Will try again in 5 seconds ...
	I0919 22:39:58.578679  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:58.578811  254979 start.go:364] duration metric: took 67.723µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:39:58.578838  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:58.578849  254979 fix.go:54] fixHost starting: m04
	I0919 22:39:58.579176  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.599096  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:39:58.599126  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:58.600560  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:39:58.600634  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:39:58.859923  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.879236  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:39:58.879668  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:39:58.897236  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:39:58.897463  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:58.897552  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:39:58.918053  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:58.918271  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:39:58.918281  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:58.918874  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38044->127.0.0.1:32833: read: connection reset by peer
	I0919 22:40:01.920959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:04.921476  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:07.922288  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:10.923340  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:13.923844  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:16.925745  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:19.926668  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:22.928799  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:25.930210  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:28.930708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:31.933147  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:34.934423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:37.934726  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:40.935749  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:43.937730  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:46.940224  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:49.940869  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:52.941959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:55.943080  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:58.944241  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:01.945832  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:04.946150  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:07.947240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:10.947732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:13.949692  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:16.951725  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:19.952381  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:22.953741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:25.954706  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:28.955793  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:31.957862  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:34.959138  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:37.960247  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:40.961431  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:43.962702  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:46.964762  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:49.965365  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:52.966748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:55.968435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:58.968992  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:01.970768  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:04.971818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:07.972196  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:10.973355  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:13.974698  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:16.976791  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:19.977362  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:22.979658  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:25.981435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.981739  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.983953  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.984393  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.984732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.985736  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.987769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.989756  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.990750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.991490  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.991855  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.992596  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:58.992632  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:42:58.992719  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.013746  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.013831  254979 machine.go:96] duration metric: took 3m0.116353121s to provisionDockerMachine
	I0919 22:42:59.013918  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:59.013953  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.033883  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.033989  254979 retry.go:31] will retry after 316.823283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.351622  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.370204  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.370320  254979 retry.go:31] will retry after 311.292492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.682751  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.702069  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.702202  254979 retry.go:31] will retry after 591.889704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.294731  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.313949  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:00.314105  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:00.314125  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.314184  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:00.314230  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.331741  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.331862  254979 retry.go:31] will retry after 207.410605ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.540373  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.558832  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.558943  254979 retry.go:31] will retry after 400.484554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.960435  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.980834  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.980981  254979 retry.go:31] will retry after 805.175329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.786666  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:01.804452  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:01.804589  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.804609  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.804626  254979 fix.go:56] duration metric: took 3m3.225778678s for fixHost
	I0919 22:43:01.804633  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.225810313s
	W0919 22:43:01.804739  254979 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.806803  254979 out.go:203] 
	W0919 22:43:01.808013  254979 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.808027  254979 out.go:285] * 
	W0919 22:43:01.810171  254979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:43:01.811468  254979 out.go:203] 
	
	
	==> Docker <==
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Setting cgroupDriver systemd"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 19 22:34:36 ha-434755 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-v7khr_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3041d5d93037c86c3cfadae837272511c922a063939621dadb3263b72427c10/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a6b58aa00fb3ed47c31437427373513e3cf158ba0f49315f653ed171815d1ae/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee54e9ddf31eb43f3d1b92eb3fba3f59792644b4cca713389d08f8df0ca678ef/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d21bfdf988a075c914dace11f808a9b5349ae9667593ff7a4af4b2c491050a8/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd64b2298ea2e14f8a79f2ef7cbc281f0a4cc54d3c5b88870d2317cf4e796496/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/474504d27788a62fc731085b07e40bfd02db95b0dee6eb9f01e76872ac1b4613/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0571a9b22aa8dba90ce65f75de015c275de4f02c9b11d07445117722c8bd5410/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/16320e14d7e184563d15b2804dbf3e9612c480a8dcb1c6db031a96760d11777b/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bcc3d90f1ae423c076bac3bff5068dc970a3e0231e8ff9693d1501842df84ab/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d662a6a0cce0d2a16826cebfb1f342627aa7c367df671adf5932fdf952bcb33/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11b728526ee593e5f0a5d07ce40d5d8d85f6444e5024cf0803eda48dfdeacbbd/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:35:17 ha-434755 dockerd[809]: time="2025-09-19T22:35:17.095642158Z" level=info msg="ignoring event" container=9f3583c0285479d52f54ce342fa39a2bf968d32dd01c6fa37ed4e82770c0069a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:35:27 ha-434755 dockerd[809]: time="2025-09-19T22:35:27.740317296Z" level=info msg="ignoring event" container=e18b45e159c1182e66b623c3d7b119a97e0abd68eb463ffb6cf7841ae7b09580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd2048728598c       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       3                   5bcc3d90f1ae4       storage-provisioner
	f2e4587626b5c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   3d21bfdf988a0       kube-vip-ha-434755
	e18b45e159c11       6e38f40d628db                                                                                         8 minutes ago       Exited              storage-provisioner       2                   5bcc3d90f1ae4       storage-provisioner
	c9a94a8bca16c       409467f978b4a                                                                                         8 minutes ago       Running             kindnet-cni               1                   11b728526ee59       kindnet-djvx4
	9a99065ed6ffc       8c811b4aec35f                                                                                         8 minutes ago       Running             busybox                   1                   8d662a6a0cce0       busybox-7b57f96db7-v7khr
	d61ae6148e697       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   16320e14d7e18       coredns-66bc5c9577-w8trg
	54785bb274bdd       df0860106674d                                                                                         8 minutes ago       Running             kube-proxy                1                   474504d27788a       kube-proxy-gzpg8
	ad8e40cf82bf1       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   0571a9b22aa8d       coredns-66bc5c9577-4lmln
	af499a9e8d13a       5f1f5298c888d                                                                                         8 minutes ago       Running             etcd                      1                   e3041d5d93037       etcd-ha-434755
	9f3583c028547       765655ea60781                                                                                         8 minutes ago       Exited              kube-vip                  0                   3d21bfdf988a0       kube-vip-ha-434755
	53ac6087206b0       46169d968e920                                                                                         8 minutes ago       Running             kube-scheduler            1                   bd64b2298ea2e       kube-scheduler-ha-434755
	379f8eb19bc07       a0af72f2ec6d6                                                                                         8 minutes ago       Running             kube-controller-manager   1                   ee54e9ddf31eb       kube-controller-manager-ha-434755
	deaf26f878611       90550c43ad2bc                                                                                         8 minutes ago       Running             kube-apiserver            1                   0a6b58aa00fb3       kube-apiserver-ha-434755
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Exited              busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	276fb29221693       52546a367cc9e                                                                                         17 minutes ago      Exited              coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         17 minutes ago      Exited              coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              18 minutes ago      Exited              kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         18 minutes ago      Exited              kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	baeef3d333816       90550c43ad2bc                                                                                         18 minutes ago      Exited              kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         18 minutes ago      Exited              etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         18 minutes ago      Exited              kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         18 minutes ago      Exited              kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad8e40cf82bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54656 - 31900 "HINFO IN 352629652807927435.4937880101774792236. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027954607s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [d61ae6148e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 30613 "HINFO IN 7566855018603772192.7692448748435092535. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034224338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 77a4720958d84b7eaaec886ee550a10f
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m12s                  kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    18m                    kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           18m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           9m38s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  Starting                 8m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m37s (x8 over 8m37s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x8 over 8m37s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x7 over 8m37s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m14s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m36s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 547644a749674c618fb4cf640be170c7
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m49s                  kube-proxy       
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m38s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m36s (x8 over 8m36s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s (x8 over 8m36s)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s (x7 over 8m36s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m14s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeNotReady             7m24s                  node-controller  Node ha-434755-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           6m36s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [af499a9e8d13] <==
	{"level":"info","ts":"2025-09-19T22:36:41.206107Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.206610Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:36:41.206642Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217881Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217883Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.847338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:43:07.862893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:43:07.871243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:43:07.880286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:43:07.881422Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"6088e2429f689fd8","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:43:07.881466Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.881846Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.881895Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887325Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887376Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887410Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887550Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887588Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6088e2429f689fd8","error":"failed to read 6088e2429f689fd8 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:43:07.887615Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887688Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:43:07.887721Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887728Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887742Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.890079Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.891371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46986","server-name":"","error":"read tcp 192.168.49.2:2380->192.168.49.4:46986: read: connection reset by peer"}
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:34:25.770918Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770951Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770958Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:34:25.770961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770964Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"error","ts":"2025-09-19T22:34:25.770967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770983Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771005Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771078Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771112Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771119Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771126Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771158Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771178Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771533Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771565Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771593Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771605Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.773232Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:34:25.773292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.773326Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:34:25.773340Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-434755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:43:14 up  1:25,  0 users,  load average: 1.26, 1.50, 12.44
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:33:33.792856       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:43.793581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:43.793641       1 main.go:301] handling current node
	I0919 22:33:43.793662       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:43.793669       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:43.793876       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:43.793892       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:53.797667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:53.797706       1 main.go:301] handling current node
	I0919 22:33:53.797728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:53.797735       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:53.797927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:53.797943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:03.791573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:03.791611       1 main.go:301] handling current node
	I0919 22:34:03.791641       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:03.791648       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:03.791853       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:03.791867       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:13.793236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:13.793265       1 main.go:301] handling current node
	I0919 22:34:13.793295       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:13.793300       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:13.793467       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:13.793476       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c9a94a8bca16] <==
	I0919 22:42:28.398413       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.398823       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:38.398878       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:38.399120       1 main.go:301] handling current node
	I0919 22:42:38.399136       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:38.399142       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398754       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:48.398788       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398985       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:48.399000       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:48.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:48.399114       1 main.go:301] handling current node
	I0919 22:42:58.397777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:58.397818       1 main.go:301] handling current node
	I0919 22:42:58.397838       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:58.397844       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:58.398040       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:58.398053       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.398799       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:43:08.398837       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:43:08.399068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:43:08.399086       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.399203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:43:08.399215       1 main.go:301] handling current node
	
	
	==> kube-apiserver [baeef3d33381] <==
	W0919 22:34:28.088519       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.091813       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.098214       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.136852       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.144149       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.260258       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.261581       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.262865       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.267338       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.271648       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.310107       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.353280       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0919 22:34:28.398855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:34:28.418582       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.455050       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.495310       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.523204       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.552947       1 logging.go:55] [core] [Channel #11 SubChannel #13]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.598893       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.615348       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.668129       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.682280       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.690932       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.713514       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.755606       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [deaf26f87861] <==
	W0919 22:35:45.088376       1 cacher.go:182] Terminating all watchers from cacher clusterroles.rbac.authorization.k8s.io
	W0919 22:35:45.088419       1 cacher.go:182] Terminating all watchers from cacher leases.coordination.k8s.io
	W0919 22:35:45.088450       1 cacher.go:182] Terminating all watchers from cacher limitranges
	W0919 22:35:45.088575       1 cacher.go:182] Terminating all watchers from cacher namespaces
	W0919 22:35:45.088601       1 cacher.go:182] Terminating all watchers from cacher poddisruptionbudgets.policy
	W0919 22:35:45.088638       1 cacher.go:182] Terminating all watchers from cacher customresourcedefinitions.apiextensions.k8s.io
	W0919 22:35:45.087060       1 cacher.go:182] Terminating all watchers from cacher podtemplates
	W0919 22:35:45.087171       1 cacher.go:182] Terminating all watchers from cacher validatingwebhookconfigurations.admissionregistration.k8s.io
	W0919 22:35:45.088937       1 cacher.go:182] Terminating all watchers from cacher horizontalpodautoscalers.autoscaling
	W0919 22:35:45.088939       1 cacher.go:182] Terminating all watchers from cacher controllerrevisions.apps
	I0919 22:35:45.947836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:35:50.477780       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:35:57.503906       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:13.278842       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:00.219288       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:30.363569       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:02.702552       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:45.378514       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:20.466062       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:54.026196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:40:30.227200       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:01.678001       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:35.549736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:22.195913       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:01.261730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [379f8eb19bc0] <==
	I0919 22:35:00.446686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:35:00.448277       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:35:00.468548       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:35:00.470805       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:35:00.473226       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:35:00.473248       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:35:00.473274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:35:00.473273       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:35:00.473294       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:35:00.473349       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:35:00.473933       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:35:00.473968       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:35:00.477672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 22:35:00.477725       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 22:35:00.477771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 22:35:00.477781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 22:35:00.477781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:35:00.477788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 22:35:00.486920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:35:00.489123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:35:00.491334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:35:00.493617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:35:00.495803       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:35:00.498093       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:35:00.499331       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [54785bb274bd] <==
	I0919 22:34:57.761058       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:34:57.833193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:35:00.913912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-434755&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0919 22:35:01.834138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:35:01.834169       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:35:01.834256       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:35:01.855270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:35:01.855328       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:35:01.860764       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:35:01.861199       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:35:01.861231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:35:01.862567       1 config.go:200] "Starting service config controller"
	I0919 22:35:01.862599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:35:01.862627       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:35:01.862658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:35:01.862680       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:35:01.862685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:35:01.862736       1 config.go:309] "Starting node config controller"
	I0919 22:35:01.863095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:35:01.863114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:35:01.963632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:35:01.963649       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:35:01.963870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	I0919 22:34:18.774597       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:34:18.774662       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:34:18.774692       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:34:18.774767       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:18.774826       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:34:18.774850       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [53ac6087206b] <==
	I0919 22:34:38.691784       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:34:49.254859       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0919 22:34:49.254890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:34:49.254896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:34:56.962003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:34:56.962030       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:34:56.963821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.963864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.964116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:34:56.964511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:34:57.064621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:41:07 ha-434755 kubelet[1340]: E0919 22:41:07.280938    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285674    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285783    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289035    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289121    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296179    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296280    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299156    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299257    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303655    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303736    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.307724    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.308098    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317113    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317223    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320642    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320728    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327066    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327175    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333029    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333130    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335444    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335565    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:43:07 ha-434755 kubelet[1340]: E0919 22:43:07.338836    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:43:07 ha-434755 kubelet[1340]: E0919 22:43:07.338927    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58758066 maxSize=10485760
	

                                                
                                                
-- /stdout --
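
For reference: the kubelet entries in the log above all repeat the same failure. The Docker runtime reports that it cannot reopen container log files, so rotation of the kube-apiserver log never succeeds and currentSize keeps growing past the 10485760-byte (10 MiB) threshold. That threshold corresponds to the kubelet's containerLogMaxSize setting, which defaults to "10Mi"; a hypothetical way to check the effective value on the node (not part of the test run) is:

out/minikube-linux-amd64 -p ha-434755 ssh "sudo grep -i containerLog /var/lib/kubelet/config.yaml"
# No output (grep finding nothing) would mean the fields are not set explicitly, so the
# kubelet defaults apply: containerLogMaxSize: 10Mi (10485760 bytes) and containerLogMaxFiles: 5.
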
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hhbsb
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb
helpers_test.go:290: (dbg) kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hhbsb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwqfz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rwqfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (10.95s)
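
For context on the FailedScheduling events in the describe output above: this is what the scheduler reports when a Deployment spreads its replicas with required pod anti-affinity on kubernetes.io/hostname and there are fewer schedulable nodes than replicas (here one node was unschedulable and the other two already ran a matching pod). A minimal sketch of such a Deployment, written as a hypothetical manifest rather than the exact one used by the test, is:

kubectl --context ha-434755 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      affinity:
        podAntiAffinity:
          # at most one busybox pod per node; pods stay Pending once schedulable nodes run out
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: busybox
            topologyKey: kubernetes.io/hostname
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28
        command: ["sleep", "3600"]
EOF
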

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-434755" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-434755\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-434755\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-434755\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"re
gistry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetP
ath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:34:29.615072967Z",
	            "FinishedAt": "2025-09-19T22:34:29.008814579Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "74329b990d9dce1255e17e62df25a8a9f852fdd2c0a3169e4fe5efa476dd74f4",
	            "SandboxKey": "/var/run/docker/netns/74329b990d9d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:d1:ee:b6:45:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "d75b4c607beec906838273796c0d4d2073838732be19fc5120b629f9aef39297",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.092970861s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ stop    │ ha-434755 stop --alsologtostderr -v 5                                                                                               │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:34 UTC │
	│ start   │ ha-434755 start --wait true --alsologtostderr -v 5                                                                                  │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:34 UTC │                     │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │                     │
	│ node    │ ha-434755 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │ 19 Sep 25 22:43 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:34:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:34:29.392603  254979 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:34:29.392715  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392724  254979 out.go:374] Setting ErrFile to fd 2...
	I0919 22:34:29.392729  254979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:34:29.392941  254979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:34:29.393348  254979 out.go:368] Setting JSON to false
	I0919 22:34:29.394260  254979 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4605,"bootTime":1758316664,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:34:29.394355  254979 start.go:140] virtualization: kvm guest
	I0919 22:34:29.396091  254979 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:34:29.397369  254979 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:34:29.397371  254979 notify.go:220] Checking for updates...
	I0919 22:34:29.399394  254979 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:34:29.400491  254979 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:29.401460  254979 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:34:29.402392  254979 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:34:29.403394  254979 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:34:29.404817  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:29.404928  254979 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:34:29.428811  254979 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:34:29.428942  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.487899  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.477486939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.488017  254979 docker.go:318] overlay module found
	I0919 22:34:29.489668  254979 out.go:179] * Using the docker driver based on existing profile
	I0919 22:34:29.490789  254979 start.go:304] selected driver: docker
	I0919 22:34:29.490803  254979 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.490958  254979 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:34:29.491069  254979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:34:29.548618  254979 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:34:29.539006546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:34:29.549315  254979 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:29.549349  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:29.549417  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:29.549484  254979 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:f
alse kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:29.551223  254979 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:34:29.552360  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:29.553540  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:29.554463  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:29.554533  254979 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:34:29.554548  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:29.554553  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:29.554642  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:29.554659  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:29.554803  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.573612  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:29.573628  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:29.573642  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:29.573663  254979 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:29.573715  254979 start.go:364] duration metric: took 34.414µs to acquireMachinesLock for "ha-434755"
	I0919 22:34:29.573732  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:29.573739  254979 fix.go:54] fixHost starting: 
	I0919 22:34:29.573944  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.590456  254979 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:34:29.590478  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:29.592146  254979 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:34:29.592198  254979 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:34:29.805688  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:34:29.822967  254979 kic.go:430] container "ha-434755" state is running.
	I0919 22:34:29.823300  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:29.840845  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:29.841033  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:29.841096  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:29.858584  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:29.858850  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:29.858861  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:29.859537  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44758->127.0.0.1:32813: read: connection reset by peer
	I0919 22:34:32.994537  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
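The connection reset above is the freshly restarted container's sshd not yet listening; the provisioner retries until the hostname command succeeds a few seconds later. A hedged sketch of that wait-for-ssh loop (the port and key path below are just the values this run happened to use, and the loop itself is an illustration, not minikube's code):

    # Hypothetical sketch: poll the forwarded SSH port until sshd accepts a
    # session, then run a command.
    HOST=127.0.0.1
    PORT=32813   # host port Docker mapped to the container's 22/tcp in this run
    KEY=/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa
    for i in $(seq 1 30); do
      if ssh -i "$KEY" -p "$PORT" -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
            docker@"$HOST" hostname; then
        break
      fi
      sleep 1
    done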
	
	I0919 22:34:32.994564  254979 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:34:32.994618  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.011712  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.011959  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.011976  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:34:33.156752  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:34:33.156836  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.173652  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.173873  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.173889  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:33.306488  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:33.306532  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:33.306552  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:33.306560  254979 provision.go:84] configureAuth start
	I0919 22:34:33.306606  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:33.323565  254979 provision.go:143] copyHostCerts
	I0919 22:34:33.323598  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323624  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:33.323639  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:33.323706  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:33.323780  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323798  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:33.323804  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:33.323829  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:33.323869  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323886  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:33.323892  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:33.323914  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:33.323960  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
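The server certificate is regenerated with SANs covering the loopback address, the machine IP, the hostname, localhost and minikube. minikube does this in Go; an equivalent openssl sketch with illustrative file names, key size and validity:

    # Hypothetical sketch: issue a server cert signed by an existing CA with the
    # SANs reported in the log. Paths, key size and -days are illustrative.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-434755/CN=ha-434755"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-434755,DNS:localhost,DNS:minikube')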
	I0919 22:34:33.559679  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:33.559738  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:33.559789  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.577865  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:33.672478  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:33.672568  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:34:33.696200  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:33.696267  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:33.719990  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:33.720060  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:33.743555  254979 provision.go:87] duration metric: took 436.981146ms to configureAuth
	I0919 22:34:33.743634  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:33.743848  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:33.743893  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.760563  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.760782  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.760794  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:33.894134  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:33.894169  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:33.894578  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:33.894689  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:33.912104  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:33.912369  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:33.912478  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:34.059005  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:34.059094  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.075824  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.076036  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:34:34.076054  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:34.214294  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
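The unit file is first written to docker.service.new and only swapped into place (followed by daemon-reload, enable and restart) when it differs from the installed unit, so an unchanged configuration does not restart Docker. The same idempotent-update pattern as the logged one-liner, shown as a standalone script:

    # docker.service.new is assumed to have been written beforehand (as above).
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi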
	I0919 22:34:34.214323  254979 machine.go:96] duration metric: took 4.373275133s to provisionDockerMachine
	I0919 22:34:34.214337  254979 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:34:34.214348  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:34.214400  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:34.214446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.231190  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.326475  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:34.329765  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:34.329812  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:34.329828  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:34.329839  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:34.329853  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:34.329911  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:34.330025  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:34.330042  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:34.330156  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:34.338505  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:34.361549  254979 start.go:296] duration metric: took 147.197651ms for postStartSetup
	I0919 22:34:34.361611  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:34.361647  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.378413  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.469191  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:34.473539  254979 fix.go:56] duration metric: took 4.899792233s for fixHost
	I0919 22:34:34.473566  254979 start.go:83] releasing machines lock for "ha-434755", held for 4.899839715s
	I0919 22:34:34.473629  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:34:34.489927  254979 ssh_runner.go:195] Run: cat /version.json
	I0919 22:34:34.489970  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.490024  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:34.490090  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:34:34.506577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.507908  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:34:34.666358  254979 ssh_runner.go:195] Run: systemctl --version
	I0919 22:34:34.670859  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:34.675244  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:34.693880  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:34.693949  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:34.702353  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
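The two find commands above are dense: the first patches any loopback CNI config so it carries a "name" field and cniVersion 1.0.0, and the second would rename bridge/podman configs out of the way (none were found here). A simplified, hypothetical version of the loopback patch for a single file, without the find/-exec plumbing:

    # Simplified sketch of the logged loopback patch; $f is an illustrative path.
    f=/etc/cni/net.d/200-loopback.conf
    if grep -q loopback "$f" && ! grep -q name "$f"; then
      sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
    fi
    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"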
	I0919 22:34:34.702375  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.702401  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.702523  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:34.718289  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:34.727659  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:34.736865  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:34.736911  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:34.745995  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.755127  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:34.764124  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:34.773283  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:34.782430  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:34.791523  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:34.800544  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:34.809524  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:34.817361  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:34.825188  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:34.890049  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
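The sed sequence above aligns containerd with the systemd cgroup driver detected on the host: it pins the sandbox image, forces SystemdCgroup to true, maps legacy runtime names to io.containerd.runc.v2 and restarts containerd. The central change, shown standalone:

    # The key edit from the logged sequence: switch containerd's runc shim to the
    # systemd cgroup driver, then restart containerd so it takes effect.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd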
	I0919 22:34:34.960529  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:34.960584  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:34.960629  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:34.973026  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:34.983825  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:35.002291  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:35.012972  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:35.023687  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:35.039432  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:35.042752  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:35.050998  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:35.067853  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:35.132842  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:35.196827  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:35.196991  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:34:35.215146  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:35.225890  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:35.291005  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:34:36.100785  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.112048  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:34:36.122871  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:34:36.134226  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.144968  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:34:36.215570  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:34:36.283944  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.348465  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:34:36.370429  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:34:36.381048  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.447404  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:34:36.520573  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:34:36.532578  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:34:36.532632  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:34:36.536280  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.536339  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.539490  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.573579  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:34:36.573643  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.597609  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:34:36.624028  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:34:36.624105  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:36.640631  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:36.644560  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.656165  254979 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:34:36.656309  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:36.656354  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.677616  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.677637  254979 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:34:36.677692  254979 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:34:36.698524  254979 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:34:36.698549  254979 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:34:36.698563  254979 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:34:36.698688  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:34:36.698756  254979 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:34:36.750118  254979 cni.go:84] Creating CNI manager for ""
	I0919 22:34:36.750142  254979 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:34:36.750153  254979 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:34:36.750179  254979 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:34:36.750289  254979 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
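The generated kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Outside of this restart path, a config of this shape could be sanity-checked with a kubeadm dry run; a hedged usage example (not how minikube invokes kubeadm here):

    # Hypothetical: validate the generated config without mutating the node.
    # The kubeadm binary version should match kubernetesVersion in the file.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run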
	
	I0919 22:34:36.750306  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:36.750341  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:36.762623  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
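The empty stderr above just means lsmod listed no ip_vs modules, so kube-vip's IPVS-based control-plane load balancing is skipped and the manifest below relies on ARP announcement of the VIP (vip_arp: "true"). On a host where IPVS is wanted, the modules would typically be loaded first; a hypothetical example (module availability depends on the kernel build):

    # Hypothetical: load the standard IPVS modules and re-run the probe.
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
      sudo modprobe "$m" || echo "module $m not available"
    done
    lsmod | grep ip_vs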
	I0919 22:34:36.762741  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
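The kube-vip manifest above is a static pod: it is written into the kubelet's staticPodPath (/etc/kubernetes/manifests, per the kubelet config earlier) rather than applied through the API server, which is what the later scp to /etc/kubernetes/manifests/kube-vip.yaml does. A standalone sketch of deploying a static pod this way (file name illustrative):

    # Hypothetical sketch: install a static pod manifest; the kubelet watches the
    # directory and starts the pod on its own, no kubectl apply involved.
    sudo cp kube-vip.yaml /etc/kubernetes/manifests/kube-vip.yaml
    # Once the kubelet picks it up, the container is visible to the runtime:
    sudo crictl ps --name kube-vip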
	I0919 22:34:36.762799  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:36.771904  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:36.771964  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:34:36.780568  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:34:36.798205  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:36.815070  254979 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:34:36.831719  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:36.848409  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:36.851767  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:36.862730  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.930528  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:36.955755  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:34:36.955780  254979 certs.go:194] generating shared ca certs ...
	I0919 22:34:36.955801  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:36.955964  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:34:36.956015  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:34:36.956028  254979 certs.go:256] generating profile certs ...
	I0919 22:34:36.956149  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:34:36.956184  254979 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837
	I0919 22:34:36.956203  254979 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.093694  254979 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 ...
	I0919 22:34:37.093723  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837: {Name:mkb7dc47ca29d762ecbca001badafbd7a0f63f6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093875  254979 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 ...
	I0919 22:34:37.093889  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837: {Name:mkfe1145f49b260387004be5cad78abcf22bf7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.093983  254979 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:34:37.094141  254979 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.cbfd4837 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:34:37.094347  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:34:37.094373  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.094399  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.094419  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.094430  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.094444  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.094453  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.094465  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.094477  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.094562  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:34:37.094597  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.094607  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.094630  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.094660  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.094692  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.094749  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:37.094791  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.094813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.094829  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.095515  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.127336  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:34:37.150544  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.175327  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:34:37.201819  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:34:37.225372  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.248103  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.271531  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:34:37.294329  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:34:37.316902  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.340094  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:34:37.363279  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:34:37.380576  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.385767  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:34:37.394806  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398055  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.398106  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:34:37.404576  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.412913  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:34:37.421966  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425379  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.425442  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:34:37.432256  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.440776  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.449890  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453164  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.453215  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.459800  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
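Each CA copied to /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 above); that hash-named symlink is how OpenSSL's default lookup locates trusted CAs. The same pattern for one certificate, standalone:

    # Same hash-and-symlink pattern as the logged commands, for a single cert.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"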
	I0919 22:34:37.468138  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.471431  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:34:37.477659  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:34:37.484148  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:34:37.491177  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:34:37.499070  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:34:37.506362  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
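Before reusing the existing control-plane certificates, each one is checked with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The individual logged checks, collapsed into one loop:

    # Same -checkend idea as the commands above, as a loop over the cert names.
    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      if ! sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
        echo "certificate ${c}.crt expires within 24h (or could not be read)"
      fi
    done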
	I0919 22:34:37.513842  254979 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:37.513988  254979 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:34:37.537542  254979 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:34:37.549913  254979 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:34:37.549939  254979 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:34:37.550009  254979 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:34:37.564566  254979 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.565106  254979 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.565386  254979 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:34:37.565797  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.566562  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:34:37.567054  254979 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:34:37.567076  254979 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:34:37.567082  254979 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:34:37.567086  254979 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:34:37.567090  254979 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:34:37.567448  254979 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:34:37.567566  254979 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:34:37.580682  254979 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:34:37.580712  254979 kubeadm.go:593] duration metric: took 30.755549ms to restartPrimaryControlPlane
	I0919 22:34:37.580721  254979 kubeadm.go:394] duration metric: took 66.889653ms to StartCluster
	I0919 22:34:37.580737  254979 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.580803  254979 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:34:37.581391  254979 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.581643  254979 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:34:37.581673  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:34:37.581681  254979 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:34:37.582003  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.584304  254979 out.go:179] * Enabled addons: 
	I0919 22:34:37.585620  254979 addons.go:514] duration metric: took 3.930682ms for enable addons: enabled=[]
	I0919 22:34:37.585668  254979 start.go:246] waiting for cluster config update ...
	I0919 22:34:37.585686  254979 start.go:255] writing updated cluster config ...
	I0919 22:34:37.587067  254979 out.go:203] 
	I0919 22:34:37.588682  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:37.588844  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.590451  254979 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:34:37.591363  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:34:37.592305  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:37.593270  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:34:37.593292  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:37.593367  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:37.593388  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:37.593398  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:34:37.593538  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.620137  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:37.620160  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:37.620173  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:37.620210  254979 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:37.620263  254979 start.go:364] duration metric: took 34.403µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:34:37.620280  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:34:37.620286  254979 fix.go:54] fixHost starting: m02
	I0919 22:34:37.620582  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.644601  254979 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:34:37.644633  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:34:37.645946  254979 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:34:37.646038  254979 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:34:37.949352  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:34:37.973649  254979 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:34:37.974176  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:37.994068  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:34:37.994337  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:37.994397  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:38.015752  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:38.016073  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:38.016093  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:38.016827  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42006->127.0.0.1:32818: read: connection reset by peer
	I0919 22:34:41.154622  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.154651  254979 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:34:41.154707  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.173029  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.173245  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.173258  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:34:41.323523  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:34:41.323600  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.341537  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.341755  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.341772  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:41.477673  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:41.477715  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:34:41.477735  254979 ubuntu.go:190] setting up certificates
	I0919 22:34:41.477745  254979 provision.go:84] configureAuth start
	I0919 22:34:41.477795  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:41.495782  254979 provision.go:143] copyHostCerts
	I0919 22:34:41.495828  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495863  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:34:41.495875  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:34:41.495952  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:34:41.496051  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496089  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:34:41.496098  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:34:41.496141  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:34:41.496218  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496251  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:34:41.496261  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:34:41.496301  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:34:41.496386  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:34:41.732873  254979 provision.go:177] copyRemoteCerts
	I0919 22:34:41.732963  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:41.733012  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.750783  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:41.848595  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:41.848667  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:41.873665  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:41.873730  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:41.897993  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:41.898059  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:41.922977  254979 provision.go:87] duration metric: took 445.218722ms to configureAuth
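	The configureAuth phase above issues a per-machine server certificate signed by the shared minikube CA, with the SANs printed in the log (127.0.0.1, 192.168.49.3, ha-434755-m02, localhost, minikube), then copies it to /etc/docker on the node. As a rough illustration only (this is not minikube's provision code, and the throwaway CA below stands in for ca.pem/ca-key.pem), the same kind of certificate can be produced with Go's crypto/x509:

// Illustrative sketch only: issue a server certificate with the SANs printed
// above, signed by a throwaway CA that stands in for ca.pem / ca-key.pem.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-434755-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}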
	I0919 22:34:41.923009  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:41.923260  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:34:41.923309  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:41.942404  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:41.942657  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:41.942672  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:34:42.078612  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:34:42.078647  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:34:42.078854  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:34:42.078927  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.096405  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.096645  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.096717  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:34:42.245231  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:34:42.245405  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:42.264515  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:42.264739  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:34:42.264757  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:34:53.646301  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-19 22:32:30.139641518 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 22:34:42.242101116 +0000
	@@ -11,6 +11,7 @@
	 Type=notify
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	 
	 
	 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 22:34:53.646338  254979 machine.go:96] duration metric: took 15.651988955s to provisionDockerMachine
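	The docker.service update above follows a write-compare-swap pattern: the rendered unit is written to docker.service.new, `diff -u` decides whether anything changed (here only the Environment=NO_PROXY line was added), and only then is the file moved into place and the daemon reloaded and restarted, which is what makes the 11s restart necessary on this run. A minimal local sketch of the same idea, without the remote diff/systemctl steps and not taken from minikube's code:

// Local sketch of write-compare-swap: only report a change (and hence the need
// for a daemon-reload/restart) when the rendered unit differs from disk.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func updateIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // nothing to do: skip mv + systemctl restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}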
	I0919 22:34:53.646360  254979 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:34:53.646376  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:53.646456  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:53.646544  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.668809  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.779279  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:53.785219  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:53.785262  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:53.785275  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:53.785285  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:53.785298  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:34:53.785375  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:34:53.785594  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:34:53.785613  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:34:53.785773  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:53.798199  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:34:53.832463  254979 start.go:296] duration metric: took 186.083271ms for postStartSetup
	I0919 22:34:53.832621  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:53.832679  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.858619  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:53.960212  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:53.966312  254979 fix.go:56] duration metric: took 16.34601659s for fixHost
	I0919 22:34:53.966340  254979 start.go:83] releasing machines lock for "ha-434755-m02", held for 16.346069332s
	I0919 22:34:53.966412  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:34:53.990694  254979 out.go:179] * Found network options:
	I0919 22:34:53.992467  254979 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:34:53.994237  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:53.994289  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:53.994386  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:53.994425  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:53.994439  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:53.994522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:34:54.015258  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.015577  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:34:54.109387  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:34:54.187526  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:54.187642  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:54.196971  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:34:54.196996  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.197029  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.197147  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.213126  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:34:54.222913  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:34:54.232770  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:34:54.232827  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:34:54.242273  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.252123  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:34:54.261682  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:34:54.271056  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:54.279900  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:34:54.289084  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:34:54.298339  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:34:54.307617  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:54.315730  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:54.323734  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:54.421356  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:34:54.553517  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:34:54.553570  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:54.553663  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:34:54.567589  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.578657  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:54.598306  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:54.610176  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:34:54.621475  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:54.637463  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:34:54.640827  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:34:54.649159  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:34:54.666320  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:34:54.793386  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:34:54.888125  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:34:54.888175  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
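	The 129-byte /etc/docker/daemon.json pushed here makes Docker use the systemd cgroup driver detected on the host. The exact file contents are not shown in the log; Docker's documented setting for this is "exec-opts", so an assumed equivalent would look like the output of this sketch:

// Assumed shape only: the exact daemon.json content is not in the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}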
	I0919 22:34:54.907425  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:34:54.918281  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:55.016695  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:35:12.030390  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.013654873s)
	I0919 22:35:12.030485  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:35:12.046005  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:35:12.062445  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:35:12.090262  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.103570  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:35:12.186633  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:35:12.276082  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.351919  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:35:12.379448  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:35:12.392643  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:12.476410  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:35:12.559621  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:35:12.572526  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:35:12.572588  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:35:12.576491  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:35:12.576564  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:35:12.579932  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:35:12.614468  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:35:12.614551  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.641603  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:35:12.668151  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:35:12.669148  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:35:12.670150  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:12.686876  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:35:12.690808  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:12.702422  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:35:12.702695  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:12.702948  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:35:12.719929  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:35:12.720184  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:35:12.720198  254979 certs.go:194] generating shared ca certs ...
	I0919 22:35:12.720233  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:35:12.720391  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:35:12.720481  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:35:12.720510  254979 certs.go:256] generating profile certs ...
	I0919 22:35:12.720610  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:35:12.720697  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.90db4c9c
	I0919 22:35:12.720757  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:35:12.720773  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:35:12.720795  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:35:12.720813  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:35:12.720830  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:35:12.720847  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:35:12.720866  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:35:12.720884  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:35:12.720902  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:35:12.720966  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:35:12.721023  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:35:12.721036  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:35:12.721076  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:35:12.721111  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:35:12.721146  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:35:12.721242  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:35:12.721296  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:12.721327  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:35:12.721346  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:35:12.721427  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:35:12.738056  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:35:12.825819  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:35:12.830244  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:35:12.843478  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:35:12.847190  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:35:12.859905  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:35:12.863484  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:35:12.875902  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:35:12.879295  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:35:12.891480  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:35:12.894661  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:35:12.906895  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:35:12.910234  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:35:12.922725  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:35:12.947840  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:35:12.972792  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:35:12.997517  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:35:13.022085  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:35:13.047365  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:35:13.072377  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:35:13.099533  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:35:13.134971  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:35:13.167709  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:35:13.206266  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:35:13.239665  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:35:13.266921  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:35:13.294118  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:35:13.321828  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:35:13.343786  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:35:13.366845  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:35:13.389708  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:35:13.412481  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:35:13.419706  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:35:13.431765  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436337  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.436418  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:35:13.444550  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:35:13.455699  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:35:13.468242  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472223  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.472279  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:35:13.480857  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:35:13.491084  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:35:13.501753  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505877  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.505933  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:35:13.512774  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:35:13.522847  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:35:13.526705  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:35:13.533354  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:35:13.540112  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:35:13.546612  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:35:13.553144  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:35:13.560238  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:35:13.568285  254979 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:35:13.568401  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:35:13.568434  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:35:13.568481  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:35:13.580554  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:13.580617  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:35:13.580665  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:35:13.589430  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:35:13.589492  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:35:13.598285  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:35:13.616427  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:35:13.634472  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
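	kube-vip.yaml is a static pod manifest: dropping the file into kubelet's manifest directory (/etc/kubernetes/manifests) is all that is required, because kubelet watches that directory and runs the pod itself, without the API server being involved. A trivial sketch of that placement (illustrative only, with a shortened manifest):

// Sketch only: a static pod is just a file in kubelet's manifest directory.
package main

import "os"

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", manifest, 0o600); err != nil {
		panic(err) // requires root and an existing manifests directory
	}
}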
	I0919 22:35:13.652547  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:35:13.656296  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:35:13.667861  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.787658  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.800614  254979 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:35:13.800904  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:13.802716  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:35:13.803906  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:35:13.907011  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:35:13.921258  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:35:13.921345  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:35:13.921671  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:35:44.196598  254979 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:35:44.196684  254979 node_ready.go:38] duration metric: took 30.274978813s for node "ha-434755-m02" to be "Ready" ...
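	The 30s wait above corresponds to polling the node object until its Ready condition turns true. A rough equivalent using client-go (a sketch, not the test's code; the kubeconfig path is the one the log references):

// Sketch: poll the node until its Ready condition is true.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21594-142711/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-434755-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-434755-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}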
	I0919 22:35:44.196715  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:44.196778  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:44.696945  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.197315  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:45.697715  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.197708  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:46.697596  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.197741  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:47.697273  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.197137  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:48.696833  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.197637  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:49.696961  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.196947  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:50.697707  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.197053  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.697638  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.197170  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:52.697689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.197733  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:53.696981  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.197207  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:54.697745  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.197895  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:55.697086  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.197535  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:56.209362  254979 api_server.go:72] duration metric: took 42.408698512s to wait for apiserver process to appear ...
	I0919 22:35:56.209386  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:56.209404  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:56.215038  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:56.215908  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:56.215931  254979 api_server.go:131] duration metric: took 6.538723ms to wait for apiserver health ...
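	Note: the trace above shows the readiness pattern minikube uses here — re-run `pgrep` for the kube-apiserver process on a 500ms cadence, then confirm `/healthz` returns 200 before moving on. A minimal Go sketch of that same poll-then-probe loop, assuming a placeholder endpoint and skipping CA verification only to keep the sketch short (this is illustrative, not minikube's actual implementation):

	// healthz_wait.go — hypothetical sketch of the wait loop logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		// InsecureSkipVerify keeps the example self-contained; a real check
		// would trust the cluster CA (e.g. .minikube/ca.crt) instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // same 500ms cadence as the log above
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
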
	I0919 22:35:56.215940  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:56.222250  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:56.222279  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.222289  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.222294  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.222299  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.222306  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.222311  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.222316  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.222322  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.222328  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.222334  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.222342  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.222348  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.222353  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.222359  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.222373  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222385  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.222394  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.222401  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.222409  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.222415  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.222424  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.222432  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.222444  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.222452  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.222459  254979 system_pods.go:74] duration metric: took 6.512304ms to wait for pod list to return data ...
	I0919 22:35:56.222473  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:56.224777  254979 default_sa.go:45] found service account: "default"
	I0919 22:35:56.224800  254979 default_sa.go:55] duration metric: took 2.313413ms for default service account to be created ...
	I0919 22:35:56.224809  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:56.230069  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:56.230095  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:35:56.230102  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:35:56.230139  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:35:56.230151  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:35:56.230157  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:35:56.230165  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:35:56.230173  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:35:56.230181  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:35:56.230189  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:35:56.230194  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:35:56.230202  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:35:56.230207  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:35:56.230215  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:35:56.230221  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:35:56.230234  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230245  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:35:56.230256  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:35:56.230266  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:35:56.230271  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:35:56.230279  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:35:56.230288  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:35:56.230293  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:35:56.230301  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:35:56.230305  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:35:56.230316  254979 system_pods.go:126] duration metric: took 5.500729ms to wait for k8s-apps to be running ...
	I0919 22:35:56.230326  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:56.230378  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:56.242876  254979 system_svc.go:56] duration metric: took 12.542054ms WaitForService to wait for kubelet
	I0919 22:35:56.242903  254979 kubeadm.go:578] duration metric: took 42.442241309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:56.242932  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:56.245954  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.245981  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.245997  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246003  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246012  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:56.246017  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:56.246026  254979 node_conditions.go:105] duration metric: took 3.08778ms to run NodePressure ...
	I0919 22:35:56.246039  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:35:56.246070  254979 start.go:255] writing updated cluster config ...
	I0919 22:35:56.248251  254979 out.go:203] 
	I0919 22:35:56.249459  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:35:56.249573  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.250931  254979 out.go:179] * Starting "ha-434755-m03" control-plane node in "ha-434755" cluster
	I0919 22:35:56.252085  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:35:56.253026  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:35:56.253903  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:35:56.253926  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:35:56.253965  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:35:56.254039  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:35:56.254055  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:35:56.254179  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.276167  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:35:56.276192  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:35:56.276216  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:35:56.276247  254979 start.go:360] acquireMachinesLock for ha-434755-m03: {Name:mk4499ef8414fba131017fb3f66e00435d0a646b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:56.276314  254979 start.go:364] duration metric: took 46.178µs to acquireMachinesLock for "ha-434755-m03"
	I0919 22:35:56.276338  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:35:56.276347  254979 fix.go:54] fixHost starting: m03
	I0919 22:35:56.276613  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.293331  254979 fix.go:112] recreateIfNeeded on ha-434755-m03: state=Stopped err=<nil>
	W0919 22:35:56.293356  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:35:56.294620  254979 out.go:252] * Restarting existing docker container for "ha-434755-m03" ...
	I0919 22:35:56.294682  254979 cli_runner.go:164] Run: docker start ha-434755-m03
	I0919 22:35:56.544302  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m03 --format={{.State.Status}}
	I0919 22:35:56.562451  254979 kic.go:430] container "ha-434755-m03" state is running.
	I0919 22:35:56.562784  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:35:56.581792  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:35:56.581992  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:35:56.582050  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:56.600026  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:56.600332  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:56.600350  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:35:56.600929  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44862->127.0.0.1:32823: read: connection reset by peer
	I0919 22:35:59.744345  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.744380  254979 ubuntu.go:182] provisioning hostname "ha-434755-m03"
	I0919 22:35:59.744468  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.762953  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.763211  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.763229  254979 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m03 && echo "ha-434755-m03" | sudo tee /etc/hostname
	I0919 22:35:59.918402  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m03
	
	I0919 22:35:59.918522  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:35:59.938390  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:35:59.938725  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:35:59.938751  254979 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:36:00.092594  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:00.092621  254979 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:36:00.092638  254979 ubuntu.go:190] setting up certificates
	I0919 22:36:00.092648  254979 provision.go:84] configureAuth start
	I0919 22:36:00.092699  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:00.111285  254979 provision.go:143] copyHostCerts
	I0919 22:36:00.111330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111368  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:36:00.111377  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:36:00.111550  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:36:00.111664  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111692  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:36:00.111702  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:36:00.111734  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:36:00.111789  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111815  254979 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:36:00.111822  254979 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:36:00.111851  254979 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:36:00.111906  254979 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m03 san=[127.0.0.1 192.168.49.4 ha-434755-m03 localhost minikube]
	I0919 22:36:00.494093  254979 provision.go:177] copyRemoteCerts
	I0919 22:36:00.494184  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:36:00.494248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.515583  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:00.617642  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:36:00.617700  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:36:00.643926  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:36:00.643995  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:36:00.672921  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:36:00.672984  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:36:00.696141  254979 provision.go:87] duration metric: took 603.480386ms to configureAuth
	I0919 22:36:00.696172  254979 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:36:00.696410  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:00.696474  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.713380  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.713659  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.713680  254979 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:36:00.854280  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:36:00.854306  254979 ubuntu.go:71] root file system type: overlay
	I0919 22:36:00.854441  254979 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:36:00.854527  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:00.877075  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:00.877355  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:00.877461  254979 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:36:01.044491  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:36:01.044612  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.068534  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:01.068808  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:36:01.068828  254979 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:36:01.223884  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:36:01.223911  254979 machine.go:96] duration metric: took 4.641904945s to provisionDockerMachine
	I0919 22:36:01.223926  254979 start.go:293] postStartSetup for "ha-434755-m03" (driver="docker")
	I0919 22:36:01.223940  254979 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:36:01.224000  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:36:01.224053  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.247249  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.353476  254979 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:36:01.356784  254979 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:36:01.356827  254979 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:36:01.356837  254979 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:36:01.356847  254979 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:36:01.356861  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:36:01.356914  254979 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:36:01.356983  254979 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:36:01.356995  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:36:01.357079  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:36:01.366123  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:01.390127  254979 start.go:296] duration metric: took 166.185556ms for postStartSetup
	I0919 22:36:01.390194  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:01.390248  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.407444  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.500338  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:36:01.504828  254979 fix.go:56] duration metric: took 5.228477836s for fixHost
	I0919 22:36:01.504853  254979 start.go:83] releasing machines lock for "ha-434755-m03", held for 5.228525958s
	I0919 22:36:01.504916  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m03
	I0919 22:36:01.524319  254979 out.go:179] * Found network options:
	I0919 22:36:01.525507  254979 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:36:01.526520  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526544  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526563  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:36:01.526574  254979 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:36:01.526649  254979 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:36:01.526654  254979 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:36:01.526686  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.526705  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m03
	I0919 22:36:01.544526  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.545603  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m03/id_rsa Username:docker}
	I0919 22:36:01.637520  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:36:01.728766  254979 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:36:01.728826  254979 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:36:01.738432  254979 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:36:01.738466  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:01.738512  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:01.738626  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:01.755304  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:36:01.764834  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:36:01.774412  254979 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:36:01.774471  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:36:01.783943  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.793341  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:36:01.802524  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:36:01.811594  254979 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:36:01.821804  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:36:01.831556  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:36:01.840844  254979 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:36:01.850193  254979 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:36:01.858696  254979 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:36:01.866797  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:01.986845  254979 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:36:02.197731  254979 start.go:495] detecting cgroup driver to use...
	I0919 22:36:02.197787  254979 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:36:02.197844  254979 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:36:02.210890  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.222293  254979 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:36:02.239996  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:36:02.251285  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:36:02.262578  254979 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:36:02.279146  254979 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:36:02.282932  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:36:02.291330  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:36:02.310148  254979 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:36:02.435893  254979 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:36:02.556587  254979 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:36:02.556638  254979 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:36:02.575909  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:36:02.587513  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:02.699861  254979 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:36:33.801843  254979 ssh_runner.go:235] Completed: sudo systemctl restart docker: (31.101937915s)
	I0919 22:36:33.801930  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:36:33.818125  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:36:33.834866  254979 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:36:33.856162  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:33.868263  254979 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:36:33.959996  254979 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:36:34.048061  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.129937  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:36:34.153114  254979 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:36:34.164068  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:34.253067  254979 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:36:34.329305  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:36:34.341450  254979 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:36:34.341524  254979 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:36:34.345717  254979 start.go:563] Will wait 60s for crictl version
	I0919 22:36:34.345785  254979 ssh_runner.go:195] Run: which crictl
	I0919 22:36:34.349309  254979 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:36:34.384417  254979 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:36:34.384478  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.410290  254979 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:36:34.435551  254979 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:36:34.436601  254979 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:36:34.437771  254979 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:36:34.438757  254979 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:36:34.455686  254979 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:36:34.459411  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:34.471099  254979 mustload.go:65] Loading cluster: ha-434755
	I0919 22:36:34.471369  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:34.471706  254979 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:36:34.488100  254979 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:36:34.488367  254979 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.4
	I0919 22:36:34.488381  254979 certs.go:194] generating shared ca certs ...
	I0919 22:36:34.488395  254979 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:36:34.488553  254979 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:36:34.488618  254979 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:36:34.488633  254979 certs.go:256] generating profile certs ...
	I0919 22:36:34.488734  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:36:34.488804  254979 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.fcdc46d6
	I0919 22:36:34.488858  254979 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:36:34.488871  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:36:34.488892  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:36:34.488912  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:36:34.488929  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:36:34.488945  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:36:34.488961  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:36:34.488983  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:36:34.489000  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:36:34.489057  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:36:34.489095  254979 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:36:34.489107  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:36:34.489136  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:36:34.489176  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:36:34.489207  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:36:34.489261  254979 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:36:34.489295  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:34.489311  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:36:34.489330  254979 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:36:34.489388  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:36:34.506474  254979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:36:34.592737  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:36:34.596550  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:36:34.609026  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:36:34.612572  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:36:34.624601  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:36:34.627756  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:36:34.639526  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:36:34.642628  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:36:34.654080  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:36:34.657248  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:36:34.668694  254979 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:36:34.671921  254979 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:36:34.683466  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:36:34.706717  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:36:34.729514  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:36:34.752135  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:36:34.775534  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:36:34.798386  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:36:34.821220  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:36:34.844089  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:36:34.869124  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:36:34.903928  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:36:34.937896  254979 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:36:34.975415  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:36:35.003119  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:36:35.033569  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:36:35.067233  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:36:35.092336  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:36:35.121987  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:36:35.159147  254979 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:36:35.187449  254979 ssh_runner.go:195] Run: openssl version
	I0919 22:36:35.196710  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:36:35.210371  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215556  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.215667  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:36:35.226373  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:36:35.242338  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:36:35.257634  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.262962  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.263018  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:36:35.272303  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:36:35.284458  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:36:35.297192  254979 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.302970  254979 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.303198  254979 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:36:35.312827  254979 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:36:35.325971  254979 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:36:35.330277  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:36:35.340364  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:36:35.350648  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:36:35.360874  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:36:35.371688  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:36:35.380714  254979 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
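	Note: the `openssl x509 -noout -in <cert> -checkend 86400` calls above assert that each control-plane certificate will still be valid 24 hours from now. An equivalent check in Go, sketched against an illustrative certificate path (assumption for the example, not minikube's actual code):

	// cert_checkend.go — hypothetical sketch of an "-checkend 86400" style check.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the next d (true means the cert would fail an openssl -checkend check).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when NotAfter falls before now+d.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
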
	I0919 22:36:35.389839  254979 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0919 22:36:35.389978  254979 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:36:35.390024  254979 kube-vip.go:115] generating kube-vip config ...
	I0919 22:36:35.390079  254979 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:36:35.406530  254979 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.406626  254979 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:36:35.406688  254979 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:36:35.416527  254979 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:36:35.416590  254979 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:36:35.428557  254979 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:36:35.448698  254979 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:36:35.468117  254979 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:36:35.487717  254979 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:36:35.491337  254979 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:36:35.502239  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.627390  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.641188  254979 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:36:35.641510  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:35.647624  254979 out.go:179] * Verifying Kubernetes components...
	I0919 22:36:35.648653  254979 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:36:35.764651  254979 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:36:35.779233  254979 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:36:35.779307  254979 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:36:35.779583  254979 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782664  254979 node_ready.go:49] node "ha-434755-m03" is "Ready"
	I0919 22:36:35.782690  254979 node_ready.go:38] duration metric: took 3.089431ms for node "ha-434755-m03" to be "Ready" ...
	I0919 22:36:35.782710  254979 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:36:35.782756  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.283749  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:36.783801  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.283597  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.783305  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.283177  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:38.783246  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.283742  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:39.783802  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.283143  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.783619  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.283703  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.783799  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.283102  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:42.783689  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.282927  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:43.783272  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.283621  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:44.783685  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.283492  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:45.783334  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.283701  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:46.783449  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.283236  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.783314  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.283694  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:48.783679  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.283688  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.783717  254979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:49.797519  254979 api_server.go:72] duration metric: took 14.156281107s to wait for apiserver process to appear ...
	I0919 22:36:49.797549  254979 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:36:49.797570  254979 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:36:49.801827  254979 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:36:49.802688  254979 api_server.go:141] control plane version: v1.34.0
	I0919 22:36:49.802713  254979 api_server.go:131] duration metric: took 5.156138ms to wait for apiserver health ...
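Editor's note: the apiserver wait above is two polls: pgrep for the kube-apiserver process roughly every half second, then a GET against /healthz until it answers "ok". A minimal shell sketch of the same health check, assuming the apiserver is reachable on 192.168.49.2:8443 and skipping certificate verification for brevity:

    # poll /healthz about every half second until it reports "ok"
    until curl -sk https://192.168.49.2:8443/healthz | grep -q '^ok$'; do
      sleep 0.5
    done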
	I0919 22:36:49.802724  254979 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:36:49.808731  254979 system_pods.go:59] 24 kube-system pods found
	I0919 22:36:49.808759  254979 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.808765  254979 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.808769  254979 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.808774  254979 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.808786  254979 system_pods.go:61] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.808797  254979 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.808802  254979 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.808807  254979 system_pods.go:61] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.808815  254979 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.808820  254979 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.808827  254979 system_pods.go:61] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.808832  254979 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.808840  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.808845  254979 system_pods.go:61] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.808851  254979 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.808857  254979 system_pods.go:61] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.808866  254979 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.808877  254979 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.808886  254979 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.808890  254979 system_pods.go:61] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.808898  254979 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.808903  254979 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.808910  254979 system_pods.go:61] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.808914  254979 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.808924  254979 system_pods.go:74] duration metric: took 6.193414ms to wait for pod list to return data ...
	I0919 22:36:49.808934  254979 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:36:49.811398  254979 default_sa.go:45] found service account: "default"
	I0919 22:36:49.811416  254979 default_sa.go:55] duration metric: took 2.472816ms for default service account to be created ...
	I0919 22:36:49.811424  254979 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:36:49.816515  254979 system_pods.go:86] 24 kube-system pods found
	I0919 22:36:49.816539  254979 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:36:49.816545  254979 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:36:49.816549  254979 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:36:49.816553  254979 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:36:49.816557  254979 system_pods.go:89] "etcd-ha-434755-m03" [6e3492c7-5026-460d-87b4-e3e52a2a36ab] Running
	I0919 22:36:49.816560  254979 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:36:49.816563  254979 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:36:49.816566  254979 system_pods.go:89] "kindnet-jrkrv" [61220abf-7b4e-440a-a5aa-788c5991cacc] Running
	I0919 22:36:49.816570  254979 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:36:49.816573  254979 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:36:49.816579  254979 system_pods.go:89] "kube-apiserver-ha-434755-m03" [acbc85b2-3446-4129-99c3-618e857912fb] Running
	I0919 22:36:49.816583  254979 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:36:49.816586  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running
	I0919 22:36:49.816590  254979 system_pods.go:89] "kube-controller-manager-ha-434755-m03" [3eb7c63e-1489-403e-9409-e9c347fff4c0] Running
	I0919 22:36:49.816593  254979 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:36:49.816600  254979 system_pods.go:89] "kube-proxy-dzrbh" [6a5d3a9f-e63f-43df-bd58-596dc274f097] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:36:49.816608  254979 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:36:49.816614  254979 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:36:49.816617  254979 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:36:49.816620  254979 system_pods.go:89] "kube-scheduler-ha-434755-m03" [65aaaab6-6371-4454-b404-7fe2f6c4e41a] Running
	I0919 22:36:49.816624  254979 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running
	I0919 22:36:49.816627  254979 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:36:49.816630  254979 system_pods.go:89] "kube-vip-ha-434755-m03" [58560a63-dc5d-41bc-9805-e904f49b2cad] Running
	I0919 22:36:49.816632  254979 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running
	I0919 22:36:49.816638  254979 system_pods.go:126] duration metric: took 5.209961ms to wait for k8s-apps to be running ...
	I0919 22:36:49.816646  254979 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:36:49.816685  254979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:49.829643  254979 system_svc.go:56] duration metric: took 12.988959ms WaitForService to wait for kubelet
	I0919 22:36:49.829668  254979 kubeadm.go:578] duration metric: took 14.188435808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:36:49.829689  254979 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:36:49.832790  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832809  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832821  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832826  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832831  254979 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:36:49.832839  254979 node_conditions.go:123] node cpu capacity is 8
	I0919 22:36:49.832844  254979 node_conditions.go:105] duration metric: took 3.149763ms to run NodePressure ...
	I0919 22:36:49.832857  254979 start.go:241] waiting for startup goroutines ...
	I0919 22:36:49.832880  254979 start.go:255] writing updated cluster config ...
	I0919 22:36:49.834545  254979 out.go:203] 
	I0919 22:36:49.835774  254979 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:36:49.835888  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.837288  254979 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:36:49.838260  254979 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:36:49.839218  254979 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:36:49.840185  254979 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:36:49.840202  254979 cache.go:58] Caching tarball of preloaded images
	I0919 22:36:49.840217  254979 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:36:49.840288  254979 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:36:49.840299  254979 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:36:49.840387  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:49.860086  254979 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:36:49.860107  254979 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:36:49.860127  254979 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:36:49.860154  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:36:49.860216  254979 start.go:364] duration metric: took 42.254µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:36:49.860236  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:36:49.860245  254979 fix.go:54] fixHost starting: m04
	I0919 22:36:49.860537  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:49.877660  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:36:49.877688  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:36:49.879872  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:36:49.879927  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:36:50.108344  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:36:50.127577  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:36:50.127896  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:36:50.145596  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:36:50.145849  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:36:50.145921  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:36:50.163888  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:36:50.164152  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:36:50.164171  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:36:50.164828  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56462->127.0.0.1:32828: read: connection reset by peer
	I0919 22:36:53.166776  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:56.168046  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:36:59.169790  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:02.171741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:05.172828  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:08.173440  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:11.174724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:14.176746  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:17.178760  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:20.179240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:23.181529  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:26.182690  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:29.183750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:32.185732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:35.186818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:38.187492  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:41.188831  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:44.189595  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:47.191778  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:50.192786  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:53.193740  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:56.194732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:37:59.195773  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:02.197710  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:05.198608  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:08.199769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:11.200694  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:14.201718  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:17.203754  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:20.204819  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:23.207054  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:26.207724  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:29.208708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:32.210377  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:35.211423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:38.212678  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:41.213761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:44.216005  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:47.217723  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:50.218834  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:53.220905  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:56.221494  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:38:59.222787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:02.224748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:05.225885  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:08.226688  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:11.228737  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:14.230719  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:17.232761  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.233716  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.234909  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.236732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.237733  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.239782  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.240787  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.241853  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.243182  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.245159  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.246728  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.247035  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:50.247075  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:39:50.247172  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.267390  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.267465  254979 machine.go:96] duration metric: took 3m0.121600261s to provisionDockerMachine
	I0919 22:39:50.267561  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:50.267599  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.284438  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.284611  254979 retry.go:31] will retry after 316.809243ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.601960  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.624526  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.624657  254979 retry.go:31] will retry after 330.8195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:50.956237  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:50.973928  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:50.974043  254979 retry.go:31] will retry after 838.035272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.812938  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.833782  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:51.833951  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:51.833974  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:51.834032  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:51.834079  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:51.854105  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:51.854225  254979 retry.go:31] will retry after 224.006538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.078741  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.096705  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.096817  254979 retry.go:31] will retry after 423.331741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.520446  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.540094  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.540200  254979 retry.go:31] will retry after 355.89061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:52.896715  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:52.915594  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:39:52.915696  254979 retry.go:31] will retry after 642.935309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.559619  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:39:53.577650  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:39:53.577803  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577829  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.577840  254979 fix.go:56] duration metric: took 3m3.717595523s for fixHost
	I0919 22:39:53.577850  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.717623259s
	W0919 22:39:53.577867  254979 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:39:53.577986  254979 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:39:53.578002  254979 start.go:729] Will try again in 5 seconds ...
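Editor's note: the provisioning failure above comes from the SSH port lookup. libmachine resolves the node's SSH endpoint by asking Docker which host port is published for the container's 22/tcp, and that HostPort is only reported while the container is running; ha-434755-m04 apparently exits again shortly after "docker start", so every inspect returns exit code 1 and the retries never succeed. A hedged sketch of the state check and the same port lookup, assuming the container name ha-434755-m04:

    # confirm the container state first; a stopped container has no published ports
    docker container inspect -f '{{.State.Status}}' ha-434755-m04
    # the SSH host-port lookup minikube performs (only meaningful when the state is "running")
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-434755-m04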
	I0919 22:39:58.578679  254979 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:58.578811  254979 start.go:364] duration metric: took 67.723µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:39:58.578838  254979 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:58.578849  254979 fix.go:54] fixHost starting: m04
	I0919 22:39:58.579176  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.599096  254979 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:39:58.599126  254979 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:58.600560  254979 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:39:58.600634  254979 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:39:58.859923  254979 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:39:58.879236  254979 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:39:58.879668  254979 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:39:58.897236  254979 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:39:58.897463  254979 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:58.897552  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:39:58.918053  254979 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:58.918271  254979 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:39:58.918281  254979 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:58.918874  254979 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38044->127.0.0.1:32833: read: connection reset by peer
	I0919 22:40:01.920959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:04.921476  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:07.922288  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:10.923340  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:13.923844  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:16.925745  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:19.926668  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:22.928799  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:25.930210  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:28.930708  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:31.933147  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:34.934423  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:37.934726  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:40.935749  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:43.937730  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:46.940224  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:49.940869  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:52.941959  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:55.943080  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:40:58.944241  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:01.945832  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:04.946150  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:07.947240  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:10.947732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:13.949692  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:16.951725  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:19.952381  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:22.953741  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:25.954706  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:28.955793  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:31.957862  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:34.959138  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:37.960247  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:40.961431  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:43.962702  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:46.964762  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:49.965365  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:52.966748  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:55.968435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:41:58.968992  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:01.970768  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:04.971818  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:07.972196  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:10.973355  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:13.974698  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:16.976791  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:19.977362  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:22.979658  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:25.981435  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.981739  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.983953  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.984393  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.984732  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.985736  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.987769  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.989756  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.990750  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.991490  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.991855  254979 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.992596  254979 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:58.992632  254979 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:42:58.992719  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.013746  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.013831  254979 machine.go:96] duration metric: took 3m0.116353121s to provisionDockerMachine
	I0919 22:42:59.013918  254979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:59.013953  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.033883  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.033989  254979 retry.go:31] will retry after 316.823283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.351622  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.370204  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.370320  254979 retry.go:31] will retry after 311.292492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:59.682751  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:42:59.702069  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:42:59.702202  254979 retry.go:31] will retry after 591.889704ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.294731  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.313949  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:00.314105  254979 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:00.314125  254979 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.314184  254979 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:00.314230  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.331741  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.331862  254979 retry.go:31] will retry after 207.410605ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.540373  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.558832  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.558943  254979 retry.go:31] will retry after 400.484554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:00.960435  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:00.980834  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	I0919 22:43:00.980981  254979 retry.go:31] will retry after 805.175329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.786666  254979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	W0919 22:43:01.804452  254979 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04 returned with exit code 1
	W0919 22:43:01.804589  254979 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.804609  254979 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.804626  254979 fix.go:56] duration metric: took 3m3.225778678s for fixHost
	I0919 22:43:01.804633  254979 start.go:83] releasing machines lock for "ha-434755-m04", held for 3m3.225810313s
	W0919 22:43:01.804739  254979 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-434755" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:43:01.806803  254979 out.go:203] 
	W0919 22:43:01.808013  254979 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:43:01.808027  254979 out.go:285] * 
	W0919 22:43:01.810171  254979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:43:01.811468  254979 out.go:203] 
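Editor's note: the GUEST_START exit above boils down to the m04 worker container never staying up after restart. Besides the suggested "minikube logs --file=logs.txt", the container's own exit reason is usually visible from Docker directly; a hedged sketch:

    # show the m04 container's state and exit code
    docker ps -a --filter name=ha-434755-m04
    # last output from the container before it stopped
    docker logs --tail 100 ha-434755-m04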
	
	
	==> Docker <==
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Setting cgroupDriver systemd"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 19 22:34:36 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:36Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 19 22:34:36 ha-434755 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bc57496cf8c97a97999359a9838b6036be50e94cb061c0b1a8b8d03c6c47882f\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-v7khr_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b8668e832861f0d8c563a666baa0cea2ac4eb0f8ddf17fd82917820d5006259\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3041d5d93037c86c3cfadae837272511c922a063939621dadb3263b72427c10/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a6b58aa00fb3ed47c31437427373513e3cf158ba0f49315f653ed171815d1ae/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ee54e9ddf31eb43f3d1b92eb3fba3f59792644b4cca713389d08f8df0ca678ef/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d21bfdf988a075c914dace11f808a9b5349ae9667593ff7a4af4b2c491050a8/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:37 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd64b2298ea2e14f8a79f2ef7cbc281f0a4cc54d3c5b88870d2317cf4e796496/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:34:38 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/474504d27788a62fc731085b07e40bfd02db95b0dee6eb9f01e76872ac1b4613/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0571a9b22aa8dba90ce65f75de015c275de4f02c9b11d07445117722c8bd5410/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/16320e14d7e184563d15b2804dbf3e9612c480a8dcb1c6db031a96760d11777b/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bcc3d90f1ae423c076bac3bff5068dc970a3e0231e8ff9693d1501842df84ab/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8d662a6a0cce0d2a16826cebfb1f342627aa7c367df671adf5932fdf952bcb33/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:34:57 ha-434755 cri-dockerd[1121]: time="2025-09-19T22:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/11b728526ee593e5f0a5d07ce40d5d8d85f6444e5024cf0803eda48dfdeacbbd/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:35:17 ha-434755 dockerd[809]: time="2025-09-19T22:35:17.095642158Z" level=info msg="ignoring event" container=9f3583c0285479d52f54ce342fa39a2bf968d32dd01c6fa37ed4e82770c0069a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:35:27 ha-434755 dockerd[809]: time="2025-09-19T22:35:27.740317296Z" level=info msg="ignoring event" container=e18b45e159c1182e66b623c3d7b119a97e0abd68eb463ffb6cf7841ae7b09580 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd2048728598c       6e38f40d628db                                                                                         7 minutes ago       Running             storage-provisioner       3                   5bcc3d90f1ae4       storage-provisioner
	f2e4587626b5c       765655ea60781                                                                                         7 minutes ago       Running             kube-vip                  1                   3d21bfdf988a0       kube-vip-ha-434755
	e18b45e159c11       6e38f40d628db                                                                                         8 minutes ago       Exited              storage-provisioner       2                   5bcc3d90f1ae4       storage-provisioner
	c9a94a8bca16c       409467f978b4a                                                                                         8 minutes ago       Running             kindnet-cni               1                   11b728526ee59       kindnet-djvx4
	9a99065ed6ffc       8c811b4aec35f                                                                                         8 minutes ago       Running             busybox                   1                   8d662a6a0cce0       busybox-7b57f96db7-v7khr
	d61ae6148e697       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   16320e14d7e18       coredns-66bc5c9577-w8trg
	54785bb274bdd       df0860106674d                                                                                         8 minutes ago       Running             kube-proxy                1                   474504d27788a       kube-proxy-gzpg8
	ad8e40cf82bf1       52546a367cc9e                                                                                         8 minutes ago       Running             coredns                   3                   0571a9b22aa8d       coredns-66bc5c9577-4lmln
	af499a9e8d13a       5f1f5298c888d                                                                                         8 minutes ago       Running             etcd                      1                   e3041d5d93037       etcd-ha-434755
	9f3583c028547       765655ea60781                                                                                         8 minutes ago       Exited              kube-vip                  0                   3d21bfdf988a0       kube-vip-ha-434755
	53ac6087206b0       46169d968e920                                                                                         8 minutes ago       Running             kube-scheduler            1                   bd64b2298ea2e       kube-scheduler-ha-434755
	379f8eb19bc07       a0af72f2ec6d6                                                                                         8 minutes ago       Running             kube-controller-manager   1                   ee54e9ddf31eb       kube-controller-manager-ha-434755
	deaf26f878611       90550c43ad2bc                                                                                         8 minutes ago       Running             kube-apiserver            1                   0a6b58aa00fb3       kube-apiserver-ha-434755
	3fa0541fe0158       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Exited              busybox                   0                   6b8668e832861       busybox-7b57f96db7-v7khr
	276fb29221693       52546a367cc9e                                                                                         18 minutes ago      Exited              coredns                   2                   b69dcaba1fe3e       coredns-66bc5c9577-w8trg
	88736f55e64e2       52546a367cc9e                                                                                         18 minutes ago      Exited              coredns                   2                   62cd9dd3b99a7       coredns-66bc5c9577-4lmln
	acbbcaa7a50ef       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              18 minutes ago      Exited              kindnet-cni               0                   41bb0b28153e1       kindnet-djvx4
	c4058cbf0779f       df0860106674d                                                                                         18 minutes ago      Exited              kube-proxy                0                   0bfeca1ad0bad       kube-proxy-gzpg8
	baeef3d333816       90550c43ad2bc                                                                                         18 minutes ago      Exited              kube-apiserver            0                   ba9ef91c2ce68       kube-apiserver-ha-434755
	f040530b17342       5f1f5298c888d                                                                                         18 minutes ago      Exited              etcd                      0                   aae975e95bddb       etcd-ha-434755
	3b75df9b742b1       46169d968e920                                                                                         18 minutes ago      Exited              kube-scheduler            0                   1e4f3e71f1dc3       kube-scheduler-ha-434755
	9d7035076f5b1       a0af72f2ec6d6                                                                                         18 minutes ago      Exited              kube-controller-manager   0                   88eef40585d59       kube-controller-manager-ha-434755
	
	
	==> coredns [276fb2922169] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37194 - 28984 "HINFO IN 5214134008379897248.7815776382534054762. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027124502s
	[INFO] 10.244.1.2:57733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000335719s
	[INFO] 10.244.1.2:49281 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.010821929s
	[INFO] 10.244.1.2:34537 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.028508329s
	[INFO] 10.244.1.2:44238 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.016387542s
	[INFO] 10.244.0.4:39774 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177448s
	[INFO] 10.244.0.4:44496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.001738346s
	[INFO] 10.244.0.4:58392 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.00011424s
	[INFO] 10.244.0.4:35209 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000116366s
	[INFO] 10.244.1.2:52925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159242s
	[INFO] 10.244.1.2:50710 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010576139s
	[INFO] 10.244.1.2:47404 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152442s
	[INFO] 10.244.1.2:47712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150108s
	[INFO] 10.244.0.4:43223 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003674617s
	[INFO] 10.244.0.4:42415 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141424s
	[INFO] 10.244.0.4:32958 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012527s
	[INFO] 10.244.1.2:50122 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162191s
	[INFO] 10.244.1.2:44215 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000246608s
	[INFO] 10.244.1.2:56477 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190468s
	[INFO] 10.244.0.4:48664 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099276s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88736f55e64e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58640 - 48004 "HINFO IN 2245373388099208717.3878449857039646311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027376041s
	[INFO] 10.244.1.2:43893 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.003165088s
	[INFO] 10.244.0.4:47799 - 5 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000915571s
	[INFO] 10.244.1.2:34293 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202813s
	[INFO] 10.244.1.2:50046 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.003537032s
	[INFO] 10.244.1.2:53810 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128737s
	[INFO] 10.244.1.2:35843 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143851s
	[INFO] 10.244.0.4:54400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205673s
	[INFO] 10.244.0.4:56117 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009425405s
	[INFO] 10.244.0.4:39564 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129639s
	[INFO] 10.244.0.4:54274 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131374s
	[INFO] 10.244.0.4:50859 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130495s
	[INFO] 10.244.1.2:44278 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130236s
	[INFO] 10.244.0.4:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144165s
	[INFO] 10.244.0.4:37008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206655s
	[INFO] 10.244.0.4:33346 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151507s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ad8e40cf82bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54656 - 31900 "HINFO IN 352629652807927435.4937880101774792236. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027954607s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [d61ae6148e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 30613 "HINFO IN 7566855018603772192.7692448748435092535. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034224338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:44 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 77a4720958d84b7eaaec886ee550a10f
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    18m                    kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                    kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           18m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           9m40s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m39s (x8 over 8m39s)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s (x8 over 8m39s)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s (x7 over 8m39s)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           7m13s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m38s                  node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:41:18 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 547644a749674c618fb4cf640be170c7
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         17m
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m51s                  kube-proxy       
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m40s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 8m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m38s (x8 over 8m38s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s (x8 over 8m38s)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s (x7 over 8m38s)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeNotReady             7m26s                  node-controller  Node ha-434755-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           7m13s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           6m38s                  node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [af499a9e8d13] <==
	{"level":"info","ts":"2025-09-19T22:36:41.206107Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.206610Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"6088e2429f689fd8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:36:41.206642Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217881Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:36:41.217883Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.847338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:43:07.862893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:43:07.871243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55900","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:43:07.880286Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12222697724345399935 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:43:07.881422Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"6088e2429f689fd8","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:43:07.881466Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.881846Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.881895Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887325Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887376Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887410Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887550Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887588Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6088e2429f689fd8","error":"failed to read 6088e2429f689fd8 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:43:07.887615Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.887688Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:43:07.887721Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887728Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:43:07.887742Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.890079Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"6088e2429f689fd8"}
	{"level":"warn","ts":"2025-09-19T22:43:07.891371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:46986","server-name":"","error":"read tcp 192.168.49.2:2380->192.168.49.4:46986: read: connection reset by peer"}
	
	
	==> etcd [f040530b1734] <==
	{"level":"info","ts":"2025-09-19T22:34:25.770918Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770902Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770951Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:34:25.770958Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:34:25.770961Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770964Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"error","ts":"2025-09-19T22:34:25.770967Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.770983Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771005Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771078Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771112Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771119Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:34:25.771126Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771158Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771178Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771533Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771565Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771593Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.771605Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"6088e2429f689fd8"}
	{"level":"info","ts":"2025-09-19T22:34:25.773232Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:34:25.773292Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:34:25.773326Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:34:25.773340Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-434755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:43:16 up  1:25,  0 users,  load average: 1.48, 1.54, 12.40
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [acbbcaa7a50e] <==
	I0919 22:33:33.792856       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:43.793581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:43.793641       1 main.go:301] handling current node
	I0919 22:33:43.793662       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:43.793669       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:43.793876       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:43.793892       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:33:53.797667       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:53.797706       1 main.go:301] handling current node
	I0919 22:33:53.797728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:33:53.797735       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:33:53.797927       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:33:53.797943       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:03.791573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:03.791611       1 main.go:301] handling current node
	I0919 22:34:03.791641       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:03.791648       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:03.791853       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:03.791867       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:34:13.793236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:34:13.793265       1 main.go:301] handling current node
	I0919 22:34:13.793295       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:34:13.793300       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:34:13.793467       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:34:13.793476       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [c9a94a8bca16] <==
	I0919 22:42:28.398413       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.398823       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:38.398878       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:38.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:38.399120       1 main.go:301] handling current node
	I0919 22:42:38.399136       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:38.399142       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398754       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:48.398788       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398985       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:48.399000       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:48.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:48.399114       1 main.go:301] handling current node
	I0919 22:42:58.397777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:58.397818       1 main.go:301] handling current node
	I0919 22:42:58.397838       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:58.397844       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:58.398040       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:58.398053       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.398799       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:43:08.398837       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:43:08.399068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:43:08.399086       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.399203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:43:08.399215       1 main.go:301] handling current node
	
	
	==> kube-apiserver [baeef3d33381] <==
	W0919 22:34:28.088519       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.091813       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.098214       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.136852       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.144149       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.260258       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.261581       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.262865       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.267338       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.271648       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.310107       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.353280       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0919 22:34:28.398855       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:34:28.418582       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.455050       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.495310       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.523204       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.552947       1 logging.go:55] [core] [Channel #11 SubChannel #13]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.598893       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.615348       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.668129       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.682280       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.690932       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.713514       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:34:28.755606       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [deaf26f87861] <==
	W0919 22:35:45.088376       1 cacher.go:182] Terminating all watchers from cacher clusterroles.rbac.authorization.k8s.io
	W0919 22:35:45.088419       1 cacher.go:182] Terminating all watchers from cacher leases.coordination.k8s.io
	W0919 22:35:45.088450       1 cacher.go:182] Terminating all watchers from cacher limitranges
	W0919 22:35:45.088575       1 cacher.go:182] Terminating all watchers from cacher namespaces
	W0919 22:35:45.088601       1 cacher.go:182] Terminating all watchers from cacher poddisruptionbudgets.policy
	W0919 22:35:45.088638       1 cacher.go:182] Terminating all watchers from cacher customresourcedefinitions.apiextensions.k8s.io
	W0919 22:35:45.087060       1 cacher.go:182] Terminating all watchers from cacher podtemplates
	W0919 22:35:45.087171       1 cacher.go:182] Terminating all watchers from cacher validatingwebhookconfigurations.admissionregistration.k8s.io
	W0919 22:35:45.088937       1 cacher.go:182] Terminating all watchers from cacher horizontalpodautoscalers.autoscaling
	W0919 22:35:45.088939       1 cacher.go:182] Terminating all watchers from cacher controllerrevisions.apps
	I0919 22:35:45.947836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:35:50.477780       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:35:57.503906       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:13.278842       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:00.219288       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:30.363569       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:02.702552       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:38:45.378514       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:20.466062       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:54.026196       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:40:30.227200       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:01.678001       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:35.549736       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:22.195913       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:01.261730       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [379f8eb19bc0] <==
	I0919 22:35:00.446686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:35:00.448277       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:35:00.468548       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 22:35:00.470805       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:35:00.473226       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:35:00.473248       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:35:00.473274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:35:00.473273       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:35:00.473294       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:35:00.473349       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:35:00.473933       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:35:00.473968       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:35:00.477672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 22:35:00.477725       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 22:35:00.477771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 22:35:00.477781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 22:35:00.477781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:35:00.477788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 22:35:00.486920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:35:00.489123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:35:00.491334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:35:00.493617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:35:00.495803       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:35:00.498093       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:35:00.499331       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [9d7035076f5b] <==
	I0919 22:24:46.729892       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:24:46.729917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:46.730126       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:24:46.730563       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 22:24:46.730598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:24:46.730680       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 22:24:46.731332       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:46.733702       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:46.734879       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:24:46.739793       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:24:46.745067       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:24:46.756573       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759762       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:46.759775       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:46.759781       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E0919 22:25:16.502891       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8gznq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8gznq\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:16.953356       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-btr4q\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:25:16.953452       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6bf58c8f-abca-468b-a2c7-04acb3bb338e", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-btr4q EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-btr4q": the object has been modified; please apply your changes to the latest version and try again
	I0919 22:25:17.013440       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m02\" does not exist"
	I0919 22:25:17.029166       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:25:21.734993       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m02"
	E0919 22:25:38.070022       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-2nm58 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-2nm58\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:25:38.835123       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-434755-m03\" does not exist"
	I0919 22:25:38.849612       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-434755-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:25:41.746239       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-434755-m03"
	
	
	==> kube-proxy [54785bb274bd] <==
	I0919 22:34:57.761058       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:34:57.833193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:35:00.913912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-434755&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0919 22:35:01.834138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:35:01.834169       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:35:01.834256       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:35:01.855270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:35:01.855328       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:35:01.860764       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:35:01.861199       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:35:01.861231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:35:01.862567       1 config.go:200] "Starting service config controller"
	I0919 22:35:01.862599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:35:01.862627       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:35:01.862658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:35:01.862680       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:35:01.862685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:35:01.862736       1 config.go:309] "Starting node config controller"
	I0919 22:35:01.863095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:35:01.863114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:35:01.963632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:35:01.963649       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:35:01.963870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c4058cbf0779] <==
	I0919 22:24:49.209419       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:49.290786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:49.391927       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:49.391969       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:24:49.392097       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:49.414535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:24:49.414600       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:49.419756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:49.420226       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:49.420264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:49.421883       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:49.421917       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:49.421937       1 config.go:200] "Starting service config controller"
	I0919 22:24:49.421945       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:49.422002       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:49.422054       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:49.422089       1 config.go:309] "Starting node config controller"
	I0919 22:24:49.422095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:49.522136       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:49.522161       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:49.522187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:49.522304       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3b75df9b742b] <==
	E0919 22:24:40.757342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:24:40.789762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:24:40.800954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:24:40.811376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:24:40.825276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:24:40.860558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:24:40.875460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:24:43.743600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:25:17.048594       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.048715       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod a477a521-e24b-449d-854f-c873cb517164(kube-system/kube-proxy-4cnsm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048747       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4cnsm\": pod kube-proxy-4cnsm is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-4cnsm"
	E0919 22:25:17.048815       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" plugin="DefaultBinder" pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:25:17.048849       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 06bab6e9-ad22-4651-947e-723307c31d04(kube-system/kindnet-74q9s) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050318       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4cnsm" node="ha-434755-m02"
	E0919 22:25:17.050187       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-74q9s\": pod kindnet-74q9s is already assigned to node \"ha-434755-m02\"" logger="UnhandledError" pod="kube-system/kindnet-74q9s"
	I0919 22:25:17.050575       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-74q9s" node="ha-434755-m02"
	E0919 22:29:45.846569       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	E0919 22:29:45.849277       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-5x7p2\": pod busybox-7b57f96db7-5x7p2 is already assigned to node \"ha-434755-m03\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-5x7p2"
	I0919 22:29:45.855649       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-5x7p2" node="ha-434755-m03"
	I0919 22:34:18.774597       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:34:18.774662       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:34:18.774692       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:34:18.774767       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:18.774826       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:34:18.774850       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [53ac6087206b] <==
	I0919 22:34:38.691784       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:34:49.254859       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0919 22:34:49.254890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:34:49.254896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:34:56.962003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:34:56.962030       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:34:56.963821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.963864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.964116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:34:56.964511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:34:57.064621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:41:07 ha-434755 kubelet[1340]: E0919 22:41:07.280938    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285674    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:17 ha-434755 kubelet[1340]: E0919 22:41:17.285783    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289035    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:27 ha-434755 kubelet[1340]: E0919 22:41:27.289121    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757571 maxSize=10485760
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296179    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:37 ha-434755 kubelet[1340]: E0919 22:41:37.296280    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299156    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:47 ha-434755 kubelet[1340]: E0919 22:41:47.299257    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303655    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:41:57 ha-434755 kubelet[1340]: E0919 22:41:57.303736    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.307724    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:07 ha-434755 kubelet[1340]: E0919 22:42:07.308098    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317113    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:17 ha-434755 kubelet[1340]: E0919 22:42:17.317223    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757736 maxSize=10485760
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320642    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:27 ha-434755 kubelet[1340]: E0919 22:42:27.320728    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327066    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:37 ha-434755 kubelet[1340]: E0919 22:42:37.327175    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333029    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:47 ha-434755 kubelet[1340]: E0919 22:42:47.333130    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335444    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:42:57 ha-434755 kubelet[1340]: E0919 22:42:57.335565    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58757901 maxSize=10485760
	Sep 19 22:43:07 ha-434755 kubelet[1340]: E0919 22:43:07.338836    1340 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd"
	Sep 19 22:43:07 ha-434755 kubelet[1340]: E0919 22:43:07.338927    1340 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log\": failed to reopen container log \"deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="deaf26f8786117a37d9391787b183b9240bcfbcc6788e3a0956ed084f1a802cd" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/1.log" currentSize=58758066 maxSize=10485760
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hhbsb
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb
helpers_test.go:290: (dbg) kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hhbsb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwqfz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rwqfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (643.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0919 22:44:56.532715  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:47:25.092000  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:48:33.466885  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:52:25.091710  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:53:33.466553  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: signal: killed (10m41.792022748s)

                                                
                                                
-- stdout --
	* [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Enabled addons: 
	
	* Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-434755-m04" worker node in "ha-434755" cluster
	* Pulling base image v0.0.48 ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:43:39.291527  306754 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:43:39.291792  306754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.291803  306754 out.go:374] Setting ErrFile to fd 2...
	I0919 22:43:39.291807  306754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.291977  306754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:43:39.292414  306754 out.go:368] Setting JSON to false
	I0919 22:43:39.293376  306754 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5155,"bootTime":1758316664,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:43:39.293466  306754 start.go:140] virtualization: kvm guest
	I0919 22:43:39.295239  306754 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:43:39.296330  306754 notify.go:220] Checking for updates...
	I0919 22:43:39.296345  306754 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:43:39.297493  306754 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:43:39.298603  306754 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:39.299685  306754 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:43:39.300719  306754 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:43:39.301699  306754 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:43:39.304205  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:39.304960  306754 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:43:39.330266  306754 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:43:39.330337  306754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:43:39.386701  306754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:43:39.376744233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:43:39.386820  306754 docker.go:318] overlay module found
	I0919 22:43:39.388240  306754 out.go:179] * Using the docker driver based on existing profile
	I0919 22:43:39.389026  306754 start.go:304] selected driver: docker
	I0919 22:43:39.389036  306754 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fa
lse kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:39.389153  306754 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:43:39.389237  306754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:43:39.443590  306754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:43:39.432336958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:43:39.444168  306754 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:43:39.444201  306754 cni.go:84] Creating CNI manager for ""
	I0919 22:43:39.444262  306754 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:43:39.444310  306754 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvid
ia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:39.445658  306754 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:43:39.446485  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:43:39.447344  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:43:39.448169  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:39.448218  306754 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:43:39.448232  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:43:39.448266  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:43:39.448335  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:43:39.448347  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:43:39.448491  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:39.467255  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:43:39.467272  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:43:39.467293  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:43:39.467321  306754 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:39.467379  306754 start.go:364] duration metric: took 36.929µs to acquireMachinesLock for "ha-434755"
	I0919 22:43:39.467400  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:39.467411  306754 fix.go:54] fixHost starting: 
	I0919 22:43:39.467648  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:39.483723  306754 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:43:39.483782  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:39.485185  306754 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:43:39.485264  306754 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:43:39.702988  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:39.721012  306754 kic.go:430] container "ha-434755" state is running.
	I0919 22:43:39.721394  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:39.738252  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:39.738464  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:39.738564  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:39.756374  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:39.756640  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:39.756655  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:39.757274  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53890->127.0.0.1:32838: read: connection reset by peer
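The handshake failure here is expected immediately after `docker start`: sshd inside the just-restarted container is not yet accepting connections, so provisioning keeps retrying the dial (the next entry shows the same hostname command completing about three seconds later). Below is a minimal sketch of that retry pattern, assuming golang.org/x/crypto/ssh and reusing the port, user and key path that appear elsewhere in this log; it is illustrative only, not minikube's actual provisioning code.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing the forwarded SSH port until sshd inside the
// restarted container accepts the handshake, mirroring the
// "connection reset by peer" -> success sequence seen in this log.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // username used by the sshutil entries later in this log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32838", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh ready")
}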
	I0919 22:43:42.892336  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:43:42.892367  306754 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:43:42.892421  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:42.910465  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:42.910692  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:42.910707  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:43:43.055420  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:43:43.055518  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:43.072353  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:43.072584  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:43.072601  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:43:43.205696  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:43.205737  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:43:43.205755  306754 ubuntu.go:190] setting up certificates
	I0919 22:43:43.205765  306754 provision.go:84] configureAuth start
	I0919 22:43:43.205813  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:43.223226  306754 provision.go:143] copyHostCerts
	I0919 22:43:43.223281  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:43.223330  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:43:43.223350  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:43.223439  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:43:43.223611  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:43.223651  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:43:43.223662  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:43.223708  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:43:43.223777  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:43.223801  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:43:43.223810  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:43.223846  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:43:43.223915  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
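configureAuth re-issues the Docker TLS server certificate with the SANs listed in the line above (127.0.0.1, 192.168.49.2, ha-434755, localhost, minikube), signed by the profile's CA key. A rough sketch of that kind of issuance with Go's crypto/x509 follows; it is illustrative only and differs in structure from minikube's real certificate code.

package provisionsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the SANs shown in the log
// line above with an already-loaded CA; the expiry mirrors the CertExpiration
// value (26280h) from the cluster config dump. Sketch only.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
	key, err = rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-434755", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return certDER, key, nil
}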
	I0919 22:43:43.965915  306754 provision.go:177] copyRemoteCerts
	I0919 22:43:43.965993  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:43:43.966049  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:43.983465  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.078601  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:43:44.078662  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:43:44.101554  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:43:44.101604  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:43:44.124200  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:43:44.124267  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:43:44.146653  306754 provision.go:87] duration metric: took 940.871108ms to configureAuth
	I0919 22:43:44.146681  306754 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:43:44.146886  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:44.146935  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.163438  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.163672  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.163685  306754 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:43:44.295935  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:43:44.295957  306754 ubuntu.go:71] root file system type: overlay
	I0919 22:43:44.296086  306754 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:43:44.296154  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.312772  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.313045  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.313156  306754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:43:44.456912  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:43:44.456987  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.473755  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.473964  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.473981  306754 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:43:44.610584  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:44.610613  306754 machine.go:96] duration metric: took 4.872132827s to provisionDockerMachine
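The docker.service contents echoed back above are rendered host-side and only moved into place (followed by daemon-reload, enable and restart) when `diff` detects a change, which keeps repeated provisioning runs idempotent. A minimal, assumed sketch of rendering such a unit with text/template is below; the empty ExecStart= reset line is the important detail, for the reason the unit's own comments give. This is not minikube's real template.

package main

import (
	"os"
	"text/template"
)

// The empty "ExecStart=" clears the command inherited from the base unit;
// without it systemd rejects a second ExecStart= for a Type=notify service.
// Flags below are copied from the log; the template itself is illustrative.
const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock{{range .ExtraArgs}} {{.}}{{end}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ ExtraArgs []string }{
		ExtraArgs: []string{
			"--default-ulimit=nofile=1048576:1048576",
			"--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem",
			"--label provider=docker",
			"--insecure-registry 10.96.0.0/12",
		},
	})
}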
	I0919 22:43:44.610629  306754 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:43:44.610644  306754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:43:44.610702  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:43:44.610742  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.627928  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.723800  306754 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:43:44.726896  306754 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:43:44.726923  306754 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:43:44.726930  306754 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:43:44.726938  306754 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:43:44.726949  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:43:44.726998  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:43:44.727084  306754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:43:44.727097  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:43:44.727179  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:43:44.735596  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:44.759329  306754 start.go:296] duration metric: took 148.683381ms for postStartSetup
	I0919 22:43:44.759401  306754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:44.759446  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.776107  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.867158  306754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:44.871450  306754 fix.go:56] duration metric: took 5.40403423s for fixHost
	I0919 22:43:44.871474  306754 start.go:83] releasing machines lock for "ha-434755", held for 5.404084037s
	I0919 22:43:44.871564  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:44.888349  306754 ssh_runner.go:195] Run: cat /version.json
	I0919 22:43:44.888391  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.888423  306754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:43:44.888478  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.906330  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.906450  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:45.067442  306754 ssh_runner.go:195] Run: systemctl --version
	I0919 22:43:45.072316  306754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:43:45.076762  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:43:45.095068  306754 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:43:45.095126  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:43:45.103588  306754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:43:45.103614  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:45.103647  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:45.103772  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:45.119318  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:43:45.128686  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:43:45.137849  306754 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:43:45.137901  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:43:45.147058  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:45.156204  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:43:45.165069  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:45.174076  306754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:43:45.182617  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:43:45.191827  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:43:45.200803  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:43:45.210038  306754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:43:45.217896  306754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:43:45.225661  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:45.290430  306754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:43:45.365571  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:45.365619  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:45.365667  306754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:43:45.378147  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:45.388969  306754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:43:45.403457  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:45.413886  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:43:45.424777  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:45.440560  306754 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:43:45.443748  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:43:45.451757  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:43:45.468855  306754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:43:45.535439  306754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:43:45.595832  306754 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:43:45.595947  306754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:43:45.613447  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:43:45.623701  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:45.684600  306754 ssh_runner.go:195] Run: sudo systemctl restart docker
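The 129-byte /etc/docker/daemon.json written a few lines above is not echoed into the log. Docker's cgroup driver is normally selected through the exec-opts key, so a file along the following lines is plausible; every key other than exec-opts is a guess. A small sketch that emits such a file:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Only exec-opts is implied by `configuring docker to use "systemd" as
	// cgroup driver`; the remaining keys here are illustrative guesses.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=systemd"},
		"log-driver":     "json-file",
		"storage-driver": "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}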
	I0919 22:43:46.473688  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:43:46.484847  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:43:46.495132  306754 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:43:46.506171  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:43:46.516348  306754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:43:46.580356  306754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:43:46.646484  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:46.710711  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:43:46.735360  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:43:46.745865  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:46.810610  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:43:46.888676  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:43:46.900040  306754 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:43:46.900100  306754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:43:46.903517  306754 start.go:563] Will wait 60s for crictl version
	I0919 22:43:46.903571  306754 ssh_runner.go:195] Run: which crictl
	I0919 22:43:46.906866  306754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:43:46.941336  306754 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:43:46.941405  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:43:46.966952  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:43:46.993474  306754 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:43:46.993567  306754 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:43:47.011223  306754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:43:47.015448  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
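This one-liner keeps the host.minikube.internal mapping idempotent: strip any existing entry, append the current gateway IP, and overwrite /etc/hosts in place via a temp file and `cp` (inside a container the file is typically a bind mount, so it cannot simply be replaced by a rename). An illustrative Go equivalent, not minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner above: drop any line ending in
// "\thost.minikube.internal", append the fresh mapping, and rewrite the file
// contents in place. Sketch only.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}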
	I0919 22:43:47.027916  306754 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:43:47.028086  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:47.028160  306754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:43:47.048532  306754 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:43:47.048559  306754 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:43:47.048634  306754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:43:47.070048  306754 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:43:47.070070  306754 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:43:47.070080  306754 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:43:47.070188  306754 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:43:47.070235  306754 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:43:47.120483  306754 cni.go:84] Creating CNI manager for ""
	I0919 22:43:47.120524  306754 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:43:47.120541  306754 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:43:47.120570  306754 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:43:47.120727  306754 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:43:47.120750  306754 kube-vip.go:115] generating kube-vip config ...
	I0919 22:43:47.120798  306754 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:43:47.133139  306754 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:47.133242  306754 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
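This manifest runs kube-vip as a static pod on each control-plane node; the instance holding the plndr-cp-lock lease answers ARP for the APIServerHAVIP 192.168.49.254, so port 8443 on that address reaches a live API server even when individual control-plane nodes go down. A quick, illustrative reachability probe (not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.49.254:8443 is the APIServerHAVIP and port from the cluster
	// config dumped earlier in this log; a successful TCP dial means the
	// current kube-vip leader is advertising the VIP.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("HA VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("HA VIP reachable via", conn.RemoteAddr())
}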
	I0919 22:43:47.133296  306754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:43:47.142163  306754 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:43:47.142230  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:43:47.150294  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:43:47.167116  306754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:43:47.183593  306754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:43:47.200026  306754 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:43:47.216296  306754 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:43:47.219560  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:43:47.229904  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:47.292236  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:43:47.316513  306754 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:43:47.316534  306754 certs.go:194] generating shared ca certs ...
	I0919 22:43:47.316549  306754 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.316708  306754 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:43:47.316752  306754 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:43:47.316763  306754 certs.go:256] generating profile certs ...
	I0919 22:43:47.316834  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:43:47.316856  306754 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e
	I0919 22:43:47.316868  306754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:43:47.496821  306754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e ...
	I0919 22:43:47.496848  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e: {Name:mk87454dee6a5f83a043f9122902a6a0c377141b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.496989  306754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e ...
	I0919 22:43:47.497001  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e: {Name:mk152a431c9e22f2691899ae04ddcffa44174e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.497080  306754 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:43:47.497202  306754 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:43:47.497333  306754 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:43:47.497352  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:43:47.497369  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:43:47.497389  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:43:47.497405  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:43:47.497416  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:43:47.497435  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:43:47.497453  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:43:47.497471  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:43:47.497543  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:43:47.497587  306754 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:43:47.497604  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:43:47.497634  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:43:47.497662  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:43:47.497693  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:43:47.497746  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:47.497786  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.497805  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.497825  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.498546  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:43:47.531566  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:43:47.558416  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:43:47.584122  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:43:47.609680  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:43:47.635691  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:43:47.658751  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:43:47.681516  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:43:47.703759  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:43:47.726075  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:43:47.748426  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:43:47.770942  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:43:47.787594  306754 ssh_runner.go:195] Run: openssl version
	I0919 22:43:47.792671  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:43:47.801510  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.804786  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.804830  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.810997  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:43:47.819279  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:43:47.829985  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.834062  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.834119  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.842120  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:43:47.853981  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:43:47.865286  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.868927  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.868965  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.876156  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
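(Illustrative aside, not part of the log: the symlink names created above, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem and 3ec20f2e.0 for 1463352.pem, come from OpenSSL's subject-name hash. A minimal Go sketch, shelling out to the same openssl CLI the log invokes; the cert path is illustrative and this is not minikube's own code.)

// subject_hash.go - print the c_rehash-style symlink target for a certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the
// subject-name hash that trust stores expect as a <hash>.0 symlink name.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The symlink created in the log would then be /etc/ssl/certs/<hash>.0.
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}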
	I0919 22:43:47.887371  306754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:43:47.891855  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:43:47.902396  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:43:47.912064  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:43:47.921218  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:43:47.929487  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:43:47.936335  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
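(Illustrative aside: the `-checkend 86400` calls above verify that each control-plane certificate stays valid for at least the next 24 hours. A minimal Go sketch of the same check using crypto/x509; the cert path is illustrative and this is not minikube's implementation.)

// check_expiry.go - equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires within
// the given window from now (the condition -checkend treats as a failure).
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}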
	I0919 22:43:47.942807  306754 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:47.942973  306754 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:43:47.976080  306754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:43:47.991060  306754 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:43:47.991082  306754 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:43:47.991134  306754 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:43:48.002447  306754 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:48.002992  306754 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:48.003190  306754 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:43:48.003645  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.004375  306754 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:43:48.004967  306754 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:43:48.004988  306754 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:43:48.004994  306754 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:43:48.005008  306754 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:43:48.005016  306754 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:43:48.005024  306754 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:43:48.005585  306754 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:43:48.017745  306754 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:43:48.017765  306754 kubeadm.go:593] duration metric: took 26.677083ms to restartPrimaryControlPlane
	I0919 22:43:48.017773  306754 kubeadm.go:394] duration metric: took 74.972941ms to StartCluster
	I0919 22:43:48.017788  306754 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.017861  306754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:48.018454  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.018701  306754 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:43:48.018725  306754 start.go:241] waiting for startup goroutines ...
	I0919 22:43:48.018733  306754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:43:48.018963  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:48.021382  306754 out.go:179] * Enabled addons: 
	I0919 22:43:48.023812  306754 addons.go:514] duration metric: took 5.072681ms for enable addons: enabled=[]
	I0919 22:43:48.023850  306754 start.go:246] waiting for cluster config update ...
	I0919 22:43:48.023859  306754 start.go:255] writing updated cluster config ...
	I0919 22:43:48.025343  306754 out.go:203] 
	I0919 22:43:48.026820  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:48.026943  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.028653  306754 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:43:48.030026  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:43:48.032033  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:43:48.033838  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:48.033864  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:43:48.033914  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:43:48.033952  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:43:48.033963  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:43:48.034087  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.058396  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:43:48.058420  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:43:48.058439  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:43:48.058476  306754 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:48.058558  306754 start.go:364] duration metric: took 57.011µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:43:48.058584  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:48.058591  306754 fix.go:54] fixHost starting: m02
	I0919 22:43:48.058862  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:48.079371  306754 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:43:48.079401  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:48.081024  306754 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:43:48.081116  306754 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:43:48.376097  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:48.396423  306754 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:43:48.396785  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:48.415958  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.416258  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:48.416329  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:48.434784  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:48.435113  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:48.435136  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:48.435824  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56880->127.0.0.1:32843: read: connection reset by peer
	I0919 22:43:51.607226  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
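(Illustrative aside: the "connection reset by peer" above is transient — the container was just restarted and sshd was not yet accepting connections, so the provisioner retries until the hostname command succeeds, as the next line shows. A minimal retry sketch under that assumption, not libmachine's actual logic; 127.0.0.1:32843 is the mapped SSH port from this log.)

// wait_for_ssh.go - poll a TCP endpoint until it accepts connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh endpoint %s not reachable after %d attempts", addr, attempts)
}

func main() {
	if err := waitForSSH("127.0.0.1:32843", 10, 3*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port is accepting connections")
}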
	
	I0919 22:43:51.607258  306754 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:43:51.607316  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:51.633598  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:51.633884  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:51.633938  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:43:51.805490  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:43:51.805587  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:51.826450  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:51.826760  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:51.826788  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:43:51.970064  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:51.970101  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:43:51.970124  306754 ubuntu.go:190] setting up certificates
	I0919 22:43:51.970136  306754 provision.go:84] configureAuth start
	I0919 22:43:51.970188  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:51.988229  306754 provision.go:143] copyHostCerts
	I0919 22:43:51.988268  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:51.988319  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:43:51.988330  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:51.988413  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:43:51.988530  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:51.988559  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:43:51.988569  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:51.988615  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:43:51.988682  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:51.988707  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:43:51.988716  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:51.988751  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:43:51.988819  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:43:52.050577  306754 provision.go:177] copyRemoteCerts
	I0919 22:43:52.050643  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:43:52.050694  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.067930  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:52.167387  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:43:52.167461  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:43:52.195400  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:43:52.195494  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:43:52.243020  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:43:52.243105  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:43:52.268296  306754 provision.go:87] duration metric: took 298.143794ms to configureAuth
	I0919 22:43:52.268326  306754 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:43:52.268617  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:52.268672  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.290487  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.290785  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.290806  306754 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:43:52.436575  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:43:52.436601  306754 ubuntu.go:71] root file system type: overlay
	I0919 22:43:52.436754  306754 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:43:52.436845  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.458543  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.458862  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.458970  306754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:43:52.619127  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:43:52.619226  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.644964  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.645263  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.645292  306754 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:43:52.829696  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:52.829732  306754 machine.go:96] duration metric: took 4.413457378s to provisionDockerMachine
	I0919 22:43:52.829747  306754 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:43:52.829761  306754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:43:52.829855  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:43:52.829911  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.856939  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:52.971256  306754 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:43:52.978974  306754 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:43:52.979019  306754 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:43:52.979032  306754 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:43:52.979041  306754 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:43:52.979055  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:43:52.979117  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:43:52.979236  306754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:43:52.979256  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:43:52.979456  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:43:52.995447  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:53.038308  306754 start.go:296] duration metric: took 208.542001ms for postStartSetup
	I0919 22:43:53.038394  306754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:53.038452  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.064431  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.171228  306754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:53.177085  306754 fix.go:56] duration metric: took 5.118486555s for fixHost
	I0919 22:43:53.177114  306754 start.go:83] releasing machines lock for "ha-434755-m02", held for 5.118539892s
	I0919 22:43:53.177184  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:53.204531  306754 out.go:179] * Found network options:
	I0919 22:43:53.205707  306754 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:43:53.206847  306754 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:43:53.206895  306754 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:43:53.207000  306754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:43:53.207055  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.207652  306754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:43:53.207806  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.235982  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.236550  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.441884  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:43:53.469344  306754 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:43:53.469423  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:43:53.482231  306754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:43:53.482266  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:53.482302  306754 detect.go:190] detected "systemd" cgroup driver on host os
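(Illustrative aside: one common heuristic behind a 'detected "systemd" cgroup driver on host os' decision — an assumption for illustration, not necessarily the exact check in detect.go — is that PID 1 on the host is systemd, in which case containerd/docker/kubelet are configured with the systemd cgroup driver, as the following log lines do.)

// detect_cgroup_driver.go - guess the cgroup driver from the host init system.
package main

import (
	"fmt"
	"os"
	"strings"
)

func hostUsesSystemd() bool {
	comm, err := os.ReadFile("/proc/1/comm") // name of PID 1
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(comm)) == "systemd"
}

func main() {
	if hostUsesSystemd() {
		fmt.Println("detected systemd: configure the runtime and kubelet with the systemd cgroup driver")
	} else {
		fmt.Println("no systemd init detected: fall back to the cgroupfs driver")
	}
}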
	I0919 22:43:53.482432  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:53.505978  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:43:53.519731  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:43:53.533562  306754 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:43:53.533642  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:43:53.547659  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:53.562526  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:43:53.576145  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:53.589986  306754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:43:53.600894  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:43:53.613432  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:43:53.626414  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:43:53.637253  306754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:43:53.648221  306754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:43:53.661018  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:53.822661  306754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:43:54.019194  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:54.019260  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:54.019325  306754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:43:54.032655  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:54.044990  306754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:43:54.063162  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:54.074199  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:43:54.085305  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:54.102241  306754 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:43:54.105666  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:43:54.114090  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:43:54.132600  306754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:43:54.260538  306754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:43:54.391535  306754 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:43:54.391578  306754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:43:54.413001  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:43:54.424344  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:54.544952  306754 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:44:20.727235  306754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (26.182243315s)
	I0919 22:44:20.727357  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:44:20.757386  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:44:20.778539  306754 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:44:20.809547  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:44:20.829218  306754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:44:20.990462  306754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:44:21.122804  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:21.270361  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:44:21.303663  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:44:21.327719  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:21.470493  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:44:21.607780  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:44:21.630475  306754 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:44:21.630569  306754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:44:21.636470  306754 start.go:563] Will wait 60s for crictl version
	I0919 22:44:21.636546  306754 ssh_runner.go:195] Run: which crictl
	I0919 22:44:21.642013  306754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:44:21.708621  306754 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:44:21.708700  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:44:21.745948  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:44:21.791651  306754 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:44:21.792927  306754 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:44:21.794235  306754 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:44:21.824158  306754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:44:21.830914  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:44:21.849027  306754 mustload.go:65] Loading cluster: ha-434755
	I0919 22:44:21.849434  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:21.850149  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:44:21.882961  306754 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:44:21.883657  306754 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:44:21.883744  306754 certs.go:194] generating shared ca certs ...
	I0919 22:44:21.883768  306754 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:44:21.884113  306754 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:44:21.884203  306754 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:44:21.884215  306754 certs.go:256] generating profile certs ...
	I0919 22:44:21.884312  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:44:21.884376  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:44:21.884420  306754 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:44:21.884432  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:44:21.884449  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:44:21.884461  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:44:21.884474  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:44:21.884487  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:44:21.884533  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:44:21.884551  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:44:21.884564  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:44:21.884619  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:44:21.884655  306754 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:44:21.884665  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:44:21.884696  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:44:21.884724  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:44:21.884751  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:44:21.884806  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:44:21.884844  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:44:21.884861  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:21.884877  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:44:21.884941  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:44:21.919935  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:44:22.033094  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:44:22.044103  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:44:22.087129  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:44:22.100830  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:44:22.140176  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:44:22.151511  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:44:22.180512  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:44:22.191050  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:44:22.232218  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:44:22.246764  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:44:22.284941  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:44:22.293161  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:44:22.330676  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:44:22.408749  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:44:22.470668  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:44:22.530262  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:44:22.590668  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:44:22.649927  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:44:22.745218  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:44:22.799341  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:44:22.854479  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:44:22.916815  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:44:22.986618  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:44:23.066105  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:44:23.128853  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:44:23.196912  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:44:23.238915  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:44:23.303660  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:44:23.346103  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:44:23.386289  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:44:23.418560  306754 ssh_runner.go:195] Run: openssl version
	I0919 22:44:23.428674  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:44:23.449549  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.458143  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.458211  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.469982  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:44:23.484616  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:44:23.499402  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.506645  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.506733  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.517121  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:44:23.533092  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:44:23.549662  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.557627  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.557678  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.567422  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:44:23.580227  306754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:44:23.585484  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:44:23.595352  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:44:23.604710  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:44:23.613959  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:44:23.623804  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:44:23.635631  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:44:23.645350  306754 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:44:23.645589  306754 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:44:23.645640  306754 kube-vip.go:115] generating kube-vip config ...
	I0919 22:44:23.645686  306754 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:44:23.663723  306754 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:44:23.663787  306754 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
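Control-plane load balancing was skipped when generating the config above because `sudo sh -c "lsmod | grep ip_vs"` returned nothing, so kube-vip runs without IPVS. A quick diagnostic sketch for checking (and, where loading modules is acceptable, adding) the IPVS modules; the modprobe line is an assumption about how one might remediate, not something this test runs:

    lsmod | grep ip_vs || echo "ip_vs modules not loaded"
    # sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh   # only if loading them is appropriate here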
	I0919 22:44:23.663834  306754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:44:23.677238  306754 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:44:23.677352  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:44:23.689915  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:44:23.718367  306754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:44:23.744992  306754 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:44:23.775604  306754 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:44:23.782963  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
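The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA VIP: it drops any existing entry, appends the new mapping, and copies the result back with sudo. The same logic, unpacked for readability (identical commands, only the layout differs):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any stale entry
      echo $'192.168.49.254\tcontrol-plane.minikube.internal'     # append the HA VIP
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts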
	I0919 22:44:23.801416  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:24.017559  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:44:24.043227  306754 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:44:24.043612  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:24.046279  306754 out.go:179] * Verifying Kubernetes components...
	I0919 22:44:24.047278  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:24.245342  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:44:24.270301  306754 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:44:24.270404  306754 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:44:24.270772  306754 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:44:31.544458  306754 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:44:31.544524  306754 node_ready.go:38] duration metric: took 7.273702746s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:44:31.544551  306754 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:44:31.544614  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:32.044840  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:32.544670  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:33.044936  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:33.545700  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:34.044733  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:34.545290  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.045175  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.545700  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.558902  306754 api_server.go:72] duration metric: took 11.515208288s to wait for apiserver process to appear ...
	I0919 22:44:35.558925  306754 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:44:35.558943  306754 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:44:35.564007  306754 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:44:35.564963  306754 api_server.go:141] control plane version: v1.34.0
	I0919 22:44:35.564986  306754 api_server.go:131] duration metric: took 6.054881ms to wait for apiserver health ...
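The health probe above is a plain GET against /healthz on the first control plane. A rough curl equivalent, using the CA file this test profile writes (anonymous access to /healthz is assumed to be allowed, as it is under default RBAC):

    # rough equivalent of the healthz check logged above
    curl --cacert /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt \
         https://192.168.49.2:8443/healthz
    # expected body: ok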
	I0919 22:44:35.564996  306754 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:44:35.569458  306754 system_pods.go:59] 17 kube-system pods found
	I0919 22:44:35.569484  306754 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:44:35.569492  306754 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:44:35.569514  306754 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:44:35.569520  306754 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:44:35.569525  306754 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:44:35.569529  306754 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:44:35.569534  306754 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:44:35.569540  306754 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:44:35.569550  306754 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:44:35.569564  306754 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:44:35.569569  306754 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:44:35.569576  306754 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:44:35.569581  306754 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:44:35.569586  306754 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:44:35.569596  306754 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0919 22:44:35.569602  306754 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:44:35.569609  306754 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:44:35.569650  306754 system_pods.go:74] duration metric: took 4.64653ms to wait for pod list to return data ...
	I0919 22:44:35.569660  306754 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:44:35.572028  306754 default_sa.go:45] found service account: "default"
	I0919 22:44:35.572046  306754 default_sa.go:55] duration metric: took 2.375873ms for default service account to be created ...
	I0919 22:44:35.572055  306754 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:44:35.575284  306754 system_pods.go:86] 17 kube-system pods found
	I0919 22:44:35.575302  306754 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:44:35.575307  306754 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:44:35.575311  306754 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:44:35.575314  306754 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:44:35.575318  306754 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:44:35.575321  306754 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:44:35.575324  306754 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:44:35.575327  306754 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:44:35.575331  306754 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:44:35.575338  306754 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:44:35.575343  306754 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:44:35.575347  306754 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:44:35.575350  306754 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:44:35.575354  306754 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:44:35.575358  306754 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0919 22:44:35.575362  306754 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:44:35.575367  306754 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:44:35.575373  306754 system_pods.go:126] duration metric: took 3.312161ms to wait for k8s-apps to be running ...
	I0919 22:44:35.575382  306754 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:44:35.575419  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:44:35.587035  306754 system_svc.go:56] duration metric: took 11.645688ms WaitForService to wait for kubelet
	I0919 22:44:35.587057  306754 kubeadm.go:578] duration metric: took 11.543372799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:44:35.587077  306754 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:44:35.592372  306754 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:44:35.592397  306754 node_conditions.go:123] node cpu capacity is 8
	I0919 22:44:35.592411  306754 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:44:35.592417  306754 node_conditions.go:123] node cpu capacity is 8
	I0919 22:44:35.592423  306754 node_conditions.go:105] duration metric: took 5.340807ms to run NodePressure ...
	I0919 22:44:35.592437  306754 start.go:241] waiting for startup goroutines ...
	I0919 22:44:35.592469  306754 start.go:255] writing updated cluster config ...
	I0919 22:44:35.593840  306754 out.go:203] 
	I0919 22:44:35.595103  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:35.595225  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.596740  306754 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:44:35.597955  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:44:35.598928  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:44:35.599836  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:44:35.599853  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:44:35.599867  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:44:35.599952  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:44:35.599968  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:44:35.600070  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.618954  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:44:35.618971  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:44:35.618985  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:44:35.619006  306754 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:44:35.619056  306754 start.go:364] duration metric: took 34.434µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:44:35.619073  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:44:35.619078  306754 fix.go:54] fixHost starting: m04
	I0919 22:44:35.619277  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:44:35.635488  306754 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:44:35.635522  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:44:35.636937  306754 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:44:35.636998  306754 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:44:35.880863  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:44:35.900207  306754 kic.go:430] container "ha-434755-m04" state is running.
	I0919 22:44:35.900738  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:44:35.920815  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.921050  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:44:35.921112  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:44:35.939494  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:44:35.939855  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:44:35.939875  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:44:35.940477  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47578->127.0.0.1:32848: read: connection reset by peer
	I0919 22:44:38.976845  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:42.013674  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:45.050648  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:48.087177  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:51.123806  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:54.159311  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:57.196664  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:00.231780  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:03.268309  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:06.304115  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:09.339820  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:12.377025  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:15.413106  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:18.449198  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:21.486347  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:24.523246  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:27.559102  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:30.595066  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:33.632178  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:36.668095  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:39.705080  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:42.741620  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:45.778094  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:48.814216  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:51.850659  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:54.888764  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:57.926773  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:00.962612  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:04.000597  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:07.037610  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:10.073879  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:13.110354  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:16.147874  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:19.184615  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:22.220478  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:25.255344  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:28.291736  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:31.329368  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:34.365263  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:37.401216  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:40.436801  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:43.474274  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:46.511002  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:49.548640  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:52.587262  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:55.623128  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:58.659480  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:01.696650  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:04.731946  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:07.768095  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:10.804300  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:13.840657  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:16.878024  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:19.912838  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:22.950049  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:25.985035  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:29.020804  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:32.057784  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:35.095114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:38.095793  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:47:38.095828  306754 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:47:38.095896  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:47:38.114241  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:38.114586  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:47:38.114610  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m04 && echo "ha-434755-m04" | sudo tee /etc/hostname
	I0919 22:47:38.149901  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:41.186255  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:44.223737  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:47.260806  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:50.296562  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:53.335133  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:56.371717  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:59.406991  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:02.443645  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:05.479626  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:08.514740  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:11.552150  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:14.588794  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:17.625824  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:20.661951  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:23.698677  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:26.736808  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:29.772266  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:32.808846  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:35.844845  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:38.880247  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:41.916844  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:44.951964  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:47.987158  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:51.023891  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:54.060750  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:57.098459  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:00.133430  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:03.169755  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:06.205767  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:09.241916  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.279154  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:15.314739  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:18.354078  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:21.391146  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:24.426978  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:27.464438  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:30.500003  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:33.536668  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:36.573788  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:39.609153  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:42.644505  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:45.679846  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:48.714985  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:51.753114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:54.789673  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:57.829152  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:00.866647  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:03.903813  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:06.940767  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:09.977770  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:13.014880  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:16.052297  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:19.088322  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:22.126414  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:25.162071  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:28.198477  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.234533  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:34.271823  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:37.308353  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:40.308595  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:50:40.308717  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:50:40.328327  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:50:40.328634  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:50:40.328654  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:50:40.364113  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:43.401607  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:46.438588  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:49.474372  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:52.510149  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:55.545433  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:58.582376  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:01.618889  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:04.654718  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:07.689743  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:10.726438  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:13.763371  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:16.799701  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:19.836415  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:22.875036  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:25.910558  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:28.946749  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:31.983660  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:35.019740  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:38.057188  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:41.093531  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:44.130632  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:47.167719  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.204000  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.242098  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:56.278177  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:59.315114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:02.351376  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:05.387418  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:08.424418  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:11.461805  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:14.496890  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:17.533764  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:20.569792  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:23.606298  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:26.642016  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:29.679917  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:32.716729  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:35.751860  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:38.788063  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:41.824681  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:44.860632  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:47.896783  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:50.933686  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:53.970455  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:57.007607  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:00.043781  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:03.080464  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:06.116459  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:09.153136  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.190750  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:15.226325  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:18.262179  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:21.298840  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:24.334155  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:27.371283  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:30.406705  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:33.443174  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:36.480706  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:39.515984  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:42.518160  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:53:42.518213  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:53:42.518253  306754 ubuntu.go:190] setting up certificates
	I0919 22:53:42.518270  306754 provision.go:84] configureAuth start
	I0919 22:53:42.518345  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:42.536471  306754 provision.go:143] copyHostCerts
	I0919 22:53:42.536541  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:42.536587  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:42.536600  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:42.536699  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:42.536849  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:42.536874  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:42.536881  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:42.536910  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:42.536960  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:42.536976  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:42.536982  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:42.537005  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:42.537075  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:42.931587  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:42.931644  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:42.931681  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:42.949394  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:42.984311  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:42.984346  306754 retry.go:31] will retry after 327.821016ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:43.347560  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.347587  306754 retry.go:31] will retry after 243.46549ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:43.627078  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.627104  306754 retry.go:31] will retry after 664.059911ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:44.327907  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.328017  306754 retry.go:31] will retry after 359.803869ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.688672  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:44.706219  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:44.741632  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.741661  306754 retry.go:31] will retry after 220.247897ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:44.996988  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.997035  306754 retry.go:31] will retry after 419.776326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:45.452683  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:45.452712  306754 retry.go:31] will retry after 552.672736ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:46.041337  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.041381  306754 retry.go:31] will retry after 500.704026ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:46.578470  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.578589  306754 provision.go:87] duration metric: took 4.060308089s to configureAuth
	W0919 22:53:46.578605  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.578628  306754 retry.go:31] will retry after 84.832µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.579768  306754 provision.go:84] configureAuth start
	I0919 22:53:46.579839  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:46.596992  306754 provision.go:143] copyHostCerts
	I0919 22:53:46.597027  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:46.597061  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:46.597072  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:46.597124  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:46.597253  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:46.597282  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:46.597289  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:46.597314  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:46.597367  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:46.597384  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:46.597389  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:46.597408  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:46.597479  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:46.734391  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:46.734445  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:46.734480  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:46.751738  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:46.786763  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.786799  306754 retry.go:31] will retry after 343.684216ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.166247  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.166274  306754 retry.go:31] will retry after 217.133577ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.420746  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.420780  306754 retry.go:31] will retry after 498.567333ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.955439  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.955479  306754 retry.go:31] will retry after 494.414185ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:48.486082  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.486169  306754 retry.go:31] will retry after 171.267823ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.658623  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:48.675595  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:48.710840  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.710867  306754 retry.go:31] will retry after 201.247835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:48.946825  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.946857  306754 retry.go:31] will retry after 359.387077ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:49.341697  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.341725  306754 retry.go:31] will retry after 422.852532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:49.800193  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.800226  306754 retry.go:31] will retry after 732.23205ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:50.569169  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.569273  306754 provision.go:87] duration metric: took 3.98948408s to configureAuth
	W0919 22:53:50.569284  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.569299  306754 retry.go:31] will retry after 150.475µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.570391  306754 provision.go:84] configureAuth start
	I0919 22:53:50.570482  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:50.589453  306754 provision.go:143] copyHostCerts
	I0919 22:53:50.589488  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:50.589595  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:50.589615  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:50.589694  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:50.589786  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:50.589811  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:50.589820  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:50.589854  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:50.589919  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:50.589945  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:50.589951  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:50.589983  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:50.590079  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:50.723808  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:50.723874  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:50.723919  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:50.741265  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:50.776681  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.776711  306754 retry.go:31] will retry after 242.012835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:51.054160  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:51.054189  306754 retry.go:31] will retry after 469.918328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:51.560111  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:51.560142  306754 retry.go:31] will retry after 806.884367ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:52.403950  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.404042  306754 retry.go:31] will retry after 174.387519ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.579469  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:52.598080  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:52.634064  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.634093  306754 retry.go:31] will retry after 145.829901ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:52.815524  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.815556  306754 retry.go:31] will retry after 498.800271ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:53.351527  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:53.351560  306754 retry.go:31] will retry after 373.407394ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:53.760023  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:53.760058  306754 retry.go:31] will retry after 694.32313ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:54.489838  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.489939  306754 provision.go:87] duration metric: took 3.919518578s to configureAuth
	W0919 22:53:54.489953  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.489980  306754 retry.go:31] will retry after 156.391µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.491170  306754 provision.go:84] configureAuth start
	I0919 22:53:54.491235  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:54.507800  306754 provision.go:143] copyHostCerts
	I0919 22:53:54.507832  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:54.507856  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:54.507865  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:54.507917  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:54.507999  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:54.508016  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:54.508025  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:54.508046  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:54.508134  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:54.508160  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:54.508172  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:54.508194  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:54.508255  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:54.702308  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:54.702363  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:54.702402  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:54.719508  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:54.754479  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.754532  306754 retry.go:31] will retry after 262.57616ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:55.054473  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.054534  306754 retry.go:31] will retry after 410.205034ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:55.499921  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.499953  306754 retry.go:31] will retry after 516.948693ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:56.052821  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.052920  306754 retry.go:31] will retry after 287.471529ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.341489  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:56.359419  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:56.395053  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.395085  306754 retry.go:31] will retry after 362.750816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:56.793926  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.793959  306754 retry.go:31] will retry after 405.598886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:57.235521  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:57.235550  306754 retry.go:31] will retry after 354.631954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:57.627139  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:57.627176  306754 retry.go:31] will retry after 562.91369ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:58.226126  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.226210  306754 provision.go:87] duration metric: took 3.735019016s to configureAuth
	W0919 22:53:58.226219  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.226245  306754 retry.go:31] will retry after 277.766µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.227384  306754 provision.go:84] configureAuth start
	I0919 22:53:58.227448  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:58.244327  306754 provision.go:143] copyHostCerts
	I0919 22:53:58.244360  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:58.244387  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:58.244399  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:58.244460  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:58.244571  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:58.244592  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:58.244596  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:58.244620  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:58.244684  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:58.244701  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:58.244707  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:58.244726  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:58.244820  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:58.526249  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:58.526305  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:58.526339  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:58.544162  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:58.580810  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.580834  306754 retry.go:31] will retry after 244.293404ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:58.861398  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.861432  306754 retry.go:31] will retry after 274.454092ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:59.172246  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:59.172275  306754 retry.go:31] will retry after 475.218135ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:59.682695  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:59.682786  306754 retry.go:31] will retry after 366.451516ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.050408  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:00.068885  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:00.104639  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.104667  306754 retry.go:31] will retry after 245.587287ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:00.386000  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.386029  306754 retry.go:31] will retry after 347.162049ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:00.768436  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.768468  306754 retry.go:31] will retry after 475.508039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:01.279090  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.279200  306754 provision.go:87] duration metric: took 3.05179768s to configureAuth
	W0919 22:54:01.279212  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.279227  306754 retry.go:31] will retry after 673.05µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.280405  306754 provision.go:84] configureAuth start
	I0919 22:54:01.280490  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:01.298157  306754 provision.go:143] copyHostCerts
	I0919 22:54:01.298201  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:01.298247  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:01.298259  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:01.298342  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:01.298442  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:01.298476  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:01.298487  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:01.298552  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:01.298643  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:01.298669  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:01.298679  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:01.298710  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:01.298801  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:01.568200  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:01.568271  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:01.568319  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:01.586091  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:01.621653  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.621687  306754 retry.go:31] will retry after 250.678085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:01.908948  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.908990  306754 retry.go:31] will retry after 380.583231ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:02.325550  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:02.325585  306754 retry.go:31] will retry after 757.589746ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:03.118940  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.119032  306754 retry.go:31] will retry after 297.891821ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.417585  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:03.435527  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:03.470577  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.470608  306754 retry.go:31] will retry after 135.697801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:03.641710  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.641743  306754 retry.go:31] will retry after 339.0934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:04.015950  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.015984  306754 retry.go:31] will retry after 772.616366ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:04.824951  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.824980  306754 retry.go:31] will retry after 516.227388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:05.376717  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.376824  306754 provision.go:87] duration metric: took 4.096399764s to configureAuth
	W0919 22:54:05.376836  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.376847  306754 retry.go:31] will retry after 386.581µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.378139  306754 provision.go:84] configureAuth start
	I0919 22:54:05.378216  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:05.395262  306754 provision.go:143] copyHostCerts
	I0919 22:54:05.395294  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:05.395318  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:05.395326  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:05.395380  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:05.395528  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:05.395554  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:05.395562  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:05.395588  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:05.395653  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:05.395671  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:05.395674  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:05.395694  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:05.395786  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:05.584739  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:05.584799  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:05.584847  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:05.602411  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:05.637553  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.637578  306754 retry.go:31] will retry after 208.291934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:05.881825  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.881858  306754 retry.go:31] will retry after 455.61088ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:06.374930  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:06.374964  306754 retry.go:31] will retry after 825.914647ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:07.236166  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.236241  306754 retry.go:31] will retry after 251.800701ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.488767  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:07.506531  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:07.542053  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.542086  306754 retry.go:31] will retry after 217.319386ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:07.795257  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.795290  306754 retry.go:31] will retry after 208.063886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:08.039248  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.039283  306754 retry.go:31] will retry after 651.900068ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:08.727030  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.727113  306754 provision.go:87] duration metric: took 3.348957352s to configureAuth
	W0919 22:54:08.727125  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.727139  306754 retry.go:31] will retry after 1.333904ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.729326  306754 provision.go:84] configureAuth start
	I0919 22:54:08.729395  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:08.746263  306754 provision.go:143] copyHostCerts
	I0919 22:54:08.746299  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:08.746330  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:08.746341  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:08.746408  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:08.746536  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:08.746561  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:08.746569  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:08.746594  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:08.746665  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:08.746682  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:08.746688  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:08.746708  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:08.746771  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:08.899961  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:08.900036  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:08.900088  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:08.916656  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:08.952077  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.952107  306754 retry.go:31] will retry after 333.635936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:09.322368  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:09.322397  306754 retry.go:31] will retry after 351.188839ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:09.709321  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:09.709351  306754 retry.go:31] will retry after 424.380279ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:10.169679  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.169706  306754 retry.go:31] will retry after 622.981079ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:10.828443  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.828560  306754 provision.go:87] duration metric: took 2.09922013s to configureAuth
	W0919 22:54:10.828575  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.828586  306754 retry.go:31] will retry after 1.922293ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.830780  306754 provision.go:84] configureAuth start
	I0919 22:54:10.830861  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:10.849570  306754 provision.go:143] copyHostCerts
	I0919 22:54:10.849610  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:10.849637  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:10.849647  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:10.849698  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:10.849783  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:10.849806  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:10.849812  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:10.849876  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:10.849946  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:10.849963  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:10.849969  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:10.849989  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:10.850059  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:11.073047  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:11.073102  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:11.073135  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:11.090381  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:11.126669  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.126695  306754 retry.go:31] will retry after 314.361348ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:11.477730  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.477758  306754 retry.go:31] will retry after 260.511886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:11.774311  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.774338  306754 retry.go:31] will retry after 432.523136ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:12.242876  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.242903  306754 retry.go:31] will retry after 624.693112ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:12.904153  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.904249  306754 provision.go:87] duration metric: took 2.073448479s to configureAuth
	W0919 22:54:12.904264  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.904278  306754 retry.go:31] will retry after 1.348392ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.906475  306754 provision.go:84] configureAuth start
	I0919 22:54:12.906566  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:12.923064  306754 provision.go:143] copyHostCerts
	I0919 22:54:12.923095  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:12.923120  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:12.923125  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:12.923171  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:12.923262  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:12.923284  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:12.923288  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:12.923309  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:12.923365  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:12.923382  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:12.923385  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:12.923403  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:12.923470  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:13.039711  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:13.039763  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:13.039805  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:13.056853  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:13.092736  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.092761  306754 retry.go:31] will retry after 176.485068ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:13.305354  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.305378  306754 retry.go:31] will retry after 493.048592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:13.833852  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.833879  306754 retry.go:31] will retry after 577.272179ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:14.446849  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.446934  306754 retry.go:31] will retry after 370.926084ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.818553  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:14.836147  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:14.871457  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.871483  306754 retry.go:31] will retry after 208.784174ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.116890  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.116922  306754 retry.go:31] will retry after 431.415105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.584759  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.584793  306754 retry.go:31] will retry after 369.293791ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.989470  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.989526  306754 retry.go:31] will retry after 747.230625ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:16.771900  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.772010  306754 provision.go:87] duration metric: took 3.865514416s to configureAuth
	W0919 22:54:16.772022  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.772038  306754 retry.go:31] will retry after 5.016981ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.777965  306754 provision.go:84] configureAuth start
	I0919 22:54:16.778044  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:16.796080  306754 provision.go:143] copyHostCerts
	I0919 22:54:16.796115  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:16.796150  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:16.796160  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:16.796216  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:16.796282  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:16.796300  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:16.796306  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:16.796327  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:16.796366  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:16.796383  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:16.796389  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:16.796407  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:16.796452  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:16.908698  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:16.908757  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:16.908790  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:16.925506  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:16.960836  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.960866  306754 retry.go:31] will retry after 214.400755ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.211378  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.211405  306754 retry.go:31] will retry after 230.919633ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.477471  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.477521  306754 retry.go:31] will retry after 339.325482ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.851812  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.851846  306754 retry.go:31] will retry after 899.158848ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:18.786166  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.786283  306754 provision.go:87] duration metric: took 2.008295325s to configureAuth
	W0919 22:54:18.786296  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.786312  306754 retry.go:31] will retry after 5.67967ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.792526  306754 provision.go:84] configureAuth start
	I0919 22:54:18.792605  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:18.810194  306754 provision.go:143] copyHostCerts
	I0919 22:54:18.810225  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:18.810251  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:18.810259  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:18.810312  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:18.810403  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:18.810421  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:18.810424  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:18.810448  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:18.810523  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:18.810550  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:18.810554  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:18.810577  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:18.810646  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:19.258474  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:19.258556  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:19.258602  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:19.276208  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:19.312202  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:19.312241  306754 retry.go:31] will retry after 285.588365ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:19.633811  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:19.633842  306754 retry.go:31] will retry after 308.066017ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:19.977393  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:19.977427  306754 retry.go:31] will retry after 525.368758ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:20.540215  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:20.540252  306754 retry.go:31] will retry after 582.70145ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-434755 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker" : signal: killed
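The repeated "ssh: unable to authenticate, attempted methods [none publickey]" retries above are the proximate failure: the ha-434755-m04 node never accepts the machine key, so configureAuth keeps restarting until the runner kills the test ("signal: killed"). A minimal way to check the same public-key handshake by hand, assuming the host port (32848) and key path shown in the log are still valid, is:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-434755-m04)
	ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no \
	  -i /home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa \
	  -p "$PORT" docker@127.0.0.1 true

If that also fails with "Permission denied (publickey)", it would suggest the key on the host no longer matches the authorized_keys inside the node container.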
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-434755
helpers_test.go:243: (dbg) docker inspect ha-434755:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	        "Created": "2025-09-19T22:24:25.435908216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:43:39.507605151Z",
	            "FinishedAt": "2025-09-19T22:43:38.824545757Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/hosts",
	        "LogPath": "/var/lib/docker/containers/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e/3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e-json.log",
	        "Name": "/ha-434755",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-434755:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-434755",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3c5829252b8b881f15f3c54c4ba70d1490c8ac9fbae20a31fdf9d65226d1379e",
	                "LowerDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa8484ef68691db024ec039bfca147494e07d923a6d3b6608b222c7b12e4a90c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-434755",
	                "Source": "/var/lib/docker/volumes/ha-434755/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-434755",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-434755",
	                "name.minikube.sigs.k8s.io": "ha-434755",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e91947ec028474ca17ae18faf93277f9091f8f3517bb382ae694c9454039ce2",
	            "SandboxKey": "/var/run/docker/netns/8e91947ec028",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32842"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32841"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-434755": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:ae:6a:d4:70:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "db70212208592ba3a09cb1094d6c6cf228f6e4f0d26c9a33f52f5ec9e3d42878",
	                    "EndpointID": "ba9348a6cfd243dfc67191a2b619bed3d6ffd595af259ee4c7c74844ab0e270e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-434755",
	                        "3c5829252b8b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
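For orientation, the Ports block in the inspect output above lists the localhost mappings the tooling relies on; the SSH mapping for the primary node can be read back with the same Go template the log uses, assuming the container is still named ha-434755:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-434755

Per the output above, this should print 32838.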
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-434755 -n ha-434755
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 logs -n 25: (1.161645214s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-434755 cp ha-434755-m03:/home/docker/cp-test.txt ha-434755-m04:/home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test_ha-434755-m03_ha-434755-m04.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp testdata/cp-test.txt ha-434755-m04:/home/docker/cp-test.txt                                                            │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile953154305/001/cp-test_ha-434755-m04.txt │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755:/home/docker/cp-test_ha-434755-m04_ha-434755.txt                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755.txt                                                │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m02:/home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m02 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m02.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ cp      │ ha-434755 cp ha-434755-m04:/home/docker/cp-test.txt ha-434755-m03:/home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt              │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh     │ ha-434755 ssh -n ha-434755-m03 sudo cat /home/docker/cp-test_ha-434755-m04_ha-434755-m03.txt                                        │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ node    │ ha-434755 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ node    │ ha-434755 node start m02 --alsologtostderr -v 5                                                                                     │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ stop    │ ha-434755 stop --alsologtostderr -v 5                                                                                               │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:34 UTC │
	│ start   │ ha-434755 start --wait true --alsologtostderr -v 5                                                                                  │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:34 UTC │                     │
	│ node    │ ha-434755 node list --alsologtostderr -v 5                                                                                          │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │                     │
	│ node    │ ha-434755 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │ 19 Sep 25 22:43 UTC │
	│ stop    │ ha-434755 stop --alsologtostderr -v 5                                                                                               │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │ 19 Sep 25 22:43 UTC │
	│ start   │ ha-434755 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker                                      │ ha-434755 │ jenkins │ v1.37.0 │ 19 Sep 25 22:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:43:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:43:39.291527  306754 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:43:39.291792  306754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.291803  306754 out.go:374] Setting ErrFile to fd 2...
	I0919 22:43:39.291807  306754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.291977  306754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:43:39.292414  306754 out.go:368] Setting JSON to false
	I0919 22:43:39.293376  306754 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5155,"bootTime":1758316664,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:43:39.293466  306754 start.go:140] virtualization: kvm guest
	I0919 22:43:39.295239  306754 out.go:179] * [ha-434755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:43:39.296330  306754 notify.go:220] Checking for updates...
	I0919 22:43:39.296345  306754 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:43:39.297493  306754 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:43:39.298603  306754 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:39.299685  306754 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:43:39.300719  306754 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:43:39.301699  306754 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:43:39.304205  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:39.304960  306754 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:43:39.330266  306754 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:43:39.330337  306754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:43:39.386701  306754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:43:39.376744233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:43:39.386820  306754 docker.go:318] overlay module found
	I0919 22:43:39.388240  306754 out.go:179] * Using the docker driver based on existing profile
	I0919 22:43:39.389026  306754 start.go:304] selected driver: docker
	I0919 22:43:39.389036  306754 start.go:918] validating driver "docker" against &{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fa
lse kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:39.389153  306754 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:43:39.389237  306754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:43:39.443590  306754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:43:39.432336958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:43:39.444168  306754 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:43:39.444201  306754 cni.go:84] Creating CNI manager for ""
	I0919 22:43:39.444262  306754 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:43:39.444310  306754 start.go:348] cluster config:
	{Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvid
ia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:39.445658  306754 out.go:179] * Starting "ha-434755" primary control-plane node in "ha-434755" cluster
	I0919 22:43:39.446485  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:43:39.447344  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:43:39.448169  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:39.448218  306754 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:43:39.448232  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:43:39.448266  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:43:39.448335  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:43:39.448347  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:43:39.448491  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:39.467255  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:43:39.467272  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:43:39.467293  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:43:39.467321  306754 start.go:360] acquireMachinesLock for ha-434755: {Name:mkbee2b246a2c7257f14e13c0a2cc8098703a645 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:39.467379  306754 start.go:364] duration metric: took 36.929µs to acquireMachinesLock for "ha-434755"
	I0919 22:43:39.467400  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:39.467411  306754 fix.go:54] fixHost starting: 
	I0919 22:43:39.467648  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:39.483723  306754 fix.go:112] recreateIfNeeded on ha-434755: state=Stopped err=<nil>
	W0919 22:43:39.483782  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:39.485185  306754 out.go:252] * Restarting existing docker container for "ha-434755" ...
	I0919 22:43:39.485264  306754 cli_runner.go:164] Run: docker start ha-434755
	I0919 22:43:39.702988  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:39.721012  306754 kic.go:430] container "ha-434755" state is running.
	I0919 22:43:39.721394  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:39.738252  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:39.738464  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:39.738564  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:39.756374  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:39.756640  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:39.756655  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:39.757274  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53890->127.0.0.1:32838: read: connection reset by peer
	I0919 22:43:42.892336  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:43:42.892367  306754 ubuntu.go:182] provisioning hostname "ha-434755"
	I0919 22:43:42.892421  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:42.910465  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:42.910692  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:42.910707  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755 && echo "ha-434755" | sudo tee /etc/hostname
	I0919 22:43:43.055420  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755
	
	I0919 22:43:43.055518  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:43.072353  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:43.072584  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:43.072601  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:43:43.205696  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:43.205737  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:43:43.205755  306754 ubuntu.go:190] setting up certificates
	I0919 22:43:43.205765  306754 provision.go:84] configureAuth start
	I0919 22:43:43.205813  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:43.223226  306754 provision.go:143] copyHostCerts
	I0919 22:43:43.223281  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:43.223330  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:43:43.223350  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:43.223439  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:43:43.223611  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:43.223651  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:43:43.223662  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:43.223708  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:43:43.223777  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:43.223801  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:43:43.223810  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:43.223846  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:43:43.223915  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755 san=[127.0.0.1 192.168.49.2 ha-434755 localhost minikube]
	I0919 22:43:43.965915  306754 provision.go:177] copyRemoteCerts
	I0919 22:43:43.965993  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:43:43.966049  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:43.983465  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.078601  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:43:44.078662  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:43:44.101554  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:43:44.101604  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:43:44.124200  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:43:44.124267  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:43:44.146653  306754 provision.go:87] duration metric: took 940.871108ms to configureAuth
	I0919 22:43:44.146681  306754 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:43:44.146886  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:44.146935  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.163438  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.163672  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.163685  306754 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:43:44.295935  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:43:44.295957  306754 ubuntu.go:71] root file system type: overlay
	I0919 22:43:44.296086  306754 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:43:44.296154  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.312772  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.313045  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.313156  306754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:43:44.456912  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:43:44.456987  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.473755  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:44.473964  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:43:44.473981  306754 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:43:44.610584  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:44.610613  306754 machine.go:96] duration metric: took 4.872132827s to provisionDockerMachine
	I0919 22:43:44.610629  306754 start.go:293] postStartSetup for "ha-434755" (driver="docker")
	I0919 22:43:44.610644  306754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:43:44.610702  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:43:44.610742  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.627928  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.723800  306754 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:43:44.726896  306754 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:43:44.726923  306754 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:43:44.726930  306754 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:43:44.726938  306754 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:43:44.726949  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:43:44.726998  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:43:44.727084  306754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:43:44.727097  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:43:44.727179  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:43:44.735596  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:44.759329  306754 start.go:296] duration metric: took 148.683381ms for postStartSetup
	I0919 22:43:44.759401  306754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:44.759446  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.776107  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.867158  306754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:44.871450  306754 fix.go:56] duration metric: took 5.40403423s for fixHost
	I0919 22:43:44.871474  306754 start.go:83] releasing machines lock for "ha-434755", held for 5.404084037s
	I0919 22:43:44.871564  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755
	I0919 22:43:44.888349  306754 ssh_runner.go:195] Run: cat /version.json
	I0919 22:43:44.888391  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.888423  306754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:43:44.888478  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:43:44.906330  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:44.906450  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:43:45.067442  306754 ssh_runner.go:195] Run: systemctl --version
	I0919 22:43:45.072316  306754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:43:45.076762  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:43:45.095068  306754 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:43:45.095126  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:43:45.103588  306754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:43:45.103614  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:45.103647  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:45.103772  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:45.119318  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:43:45.128686  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:43:45.137849  306754 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:43:45.137901  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:43:45.147058  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:45.156204  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:43:45.165069  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:45.174076  306754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:43:45.182617  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:43:45.191827  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:43:45.200803  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:43:45.210038  306754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:43:45.217896  306754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:43:45.225661  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:45.290430  306754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:43:45.365571  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:45.365619  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:45.365667  306754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:43:45.378147  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:45.388969  306754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:43:45.403457  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:45.413886  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:43:45.424777  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:45.440560  306754 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:43:45.443748  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:43:45.451757  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:43:45.468855  306754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:43:45.535439  306754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:43:45.595832  306754 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:43:45.595947  306754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:43:45.613447  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:43:45.623701  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:45.684600  306754 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:43:46.473688  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:43:46.484847  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:43:46.495132  306754 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:43:46.506171  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:43:46.516348  306754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:43:46.580356  306754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:43:46.646484  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:46.710711  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:43:46.735360  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:43:46.745865  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:46.810610  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:43:46.888676  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:43:46.900040  306754 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:43:46.900100  306754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:43:46.903517  306754 start.go:563] Will wait 60s for crictl version
	I0919 22:43:46.903571  306754 ssh_runner.go:195] Run: which crictl
	I0919 22:43:46.906866  306754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:43:46.941336  306754 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:43:46.941405  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:43:46.966952  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:43:46.993474  306754 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:43:46.993567  306754 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:43:47.011223  306754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:43:47.015448  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:43:47.027916  306754 kubeadm.go:875] updating cluster {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevir
t:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:43:47.028086  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:47.028160  306754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:43:47.048532  306754 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:43:47.048559  306754 docker.go:621] Images already preloaded, skipping extraction
	I0919 22:43:47.048634  306754 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 22:43:47.070048  306754 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	ghcr.io/kube-vip/kube-vip:v1.0.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	kindest/kindnetd:v20250512-df8de77b
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0919 22:43:47.070070  306754 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:43:47.070080  306754 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0919 22:43:47.070188  306754 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:43:47.070235  306754 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 22:43:47.120483  306754 cni.go:84] Creating CNI manager for ""
	I0919 22:43:47.120524  306754 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:43:47.120541  306754 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:43:47.120570  306754 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-434755 NodeName:ha-434755 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/man
ifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:43:47.120727  306754 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-434755"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:43:47.120750  306754 kube-vip.go:115] generating kube-vip config ...
	I0919 22:43:47.120798  306754 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:43:47.133139  306754 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:47.133242  306754 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:43:47.133296  306754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:43:47.142163  306754 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:43:47.142230  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:43:47.150294  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0919 22:43:47.167116  306754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:43:47.183593  306754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:43:47.200026  306754 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:43:47.216296  306754 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:43:47.219560  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:43:47.229904  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:47.292236  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:43:47.316513  306754 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.2
	I0919 22:43:47.316534  306754 certs.go:194] generating shared ca certs ...
	I0919 22:43:47.316549  306754 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.316708  306754 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:43:47.316752  306754 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:43:47.316763  306754 certs.go:256] generating profile certs ...
	I0919 22:43:47.316834  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:43:47.316856  306754 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e
	I0919 22:43:47.316868  306754 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:43:47.496821  306754 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e ...
	I0919 22:43:47.496848  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e: {Name:mk87454dee6a5f83a043f9122902a6a0c377141b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.496989  306754 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e ...
	I0919 22:43:47.497001  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e: {Name:mk152a431c9e22f2691899ae04ddcffa44174e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:47.497080  306754 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt.ae12ef2e -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt
	I0919 22:43:47.497202  306754 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.ae12ef2e -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key
	I0919 22:43:47.497333  306754 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:43:47.497352  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:43:47.497369  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:43:47.497389  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:43:47.497405  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:43:47.497416  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:43:47.497435  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:43:47.497453  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:43:47.497471  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:43:47.497543  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:43:47.497587  306754 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:43:47.497604  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:43:47.497634  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:43:47.497662  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:43:47.497693  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:43:47.497746  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:47.497786  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.497805  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.497825  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.498546  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:43:47.531566  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:43:47.558416  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:43:47.584122  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:43:47.609680  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:43:47.635691  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:43:47.658751  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:43:47.681516  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:43:47.703759  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:43:47.726075  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:43:47.748426  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:43:47.770942  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:43:47.787594  306754 ssh_runner.go:195] Run: openssl version
	I0919 22:43:47.792671  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:43:47.801510  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.804786  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.804830  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:43:47.810997  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:43:47.819279  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:43:47.829985  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.834062  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.834119  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:43:47.842120  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:43:47.853981  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:43:47.865286  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.868927  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.868965  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:43:47.876156  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:43:47.887371  306754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:43:47.891855  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:43:47.902396  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:43:47.912064  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:43:47.921218  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:43:47.929487  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:43:47.936335  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:43:47.942807  306754 kubeadm.go:392] StartCluster: {Name:ha-434755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:f
alse logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:43:47.942973  306754 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 22:43:47.976080  306754 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:43:47.991060  306754 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:43:47.991082  306754 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:43:47.991134  306754 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:43:48.002447  306754 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:43:48.002992  306754 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-434755" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:48.003190  306754 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "ha-434755" cluster setting kubeconfig missing "ha-434755" context setting]
	I0919 22:43:48.003645  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.004375  306754 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:43:48.004967  306754 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:43:48.004988  306754 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:43:48.004994  306754 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:43:48.005008  306754 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:43:48.005016  306754 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:43:48.005024  306754 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:43:48.005585  306754 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:43:48.017745  306754 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:43:48.017765  306754 kubeadm.go:593] duration metric: took 26.677083ms to restartPrimaryControlPlane
	I0919 22:43:48.017773  306754 kubeadm.go:394] duration metric: took 74.972941ms to StartCluster
	I0919 22:43:48.017788  306754 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.017861  306754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:43:48.018454  306754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:43:48.018701  306754 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:43:48.018725  306754 start.go:241] waiting for startup goroutines ...
	I0919 22:43:48.018733  306754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:43:48.018963  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:48.021382  306754 out.go:179] * Enabled addons: 
	I0919 22:43:48.023812  306754 addons.go:514] duration metric: took 5.072681ms for enable addons: enabled=[]
	I0919 22:43:48.023850  306754 start.go:246] waiting for cluster config update ...
	I0919 22:43:48.023859  306754 start.go:255] writing updated cluster config ...
	I0919 22:43:48.025343  306754 out.go:203] 
	I0919 22:43:48.026820  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:48.026943  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.028653  306754 out.go:179] * Starting "ha-434755-m02" control-plane node in "ha-434755" cluster
	I0919 22:43:48.030026  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:43:48.032033  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:43:48.033838  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:43:48.033864  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:43:48.033914  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:43:48.033952  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:43:48.033963  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:43:48.034087  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.058396  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:43:48.058420  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:43:48.058439  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:43:48.058476  306754 start.go:360] acquireMachinesLock for ha-434755-m02: {Name:mk9ca5ab09eecc208a09b7d4c6860cdbcbbd1861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:43:48.058558  306754 start.go:364] duration metric: took 57.011µs to acquireMachinesLock for "ha-434755-m02"
	I0919 22:43:48.058584  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:43:48.058591  306754 fix.go:54] fixHost starting: m02
	I0919 22:43:48.058862  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:48.079371  306754 fix.go:112] recreateIfNeeded on ha-434755-m02: state=Stopped err=<nil>
	W0919 22:43:48.079401  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:43:48.081024  306754 out.go:252] * Restarting existing docker container for "ha-434755-m02" ...
	I0919 22:43:48.081116  306754 cli_runner.go:164] Run: docker start ha-434755-m02
	I0919 22:43:48.376097  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:48.396423  306754 kic.go:430] container "ha-434755-m02" state is running.
	I0919 22:43:48.396785  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:48.415958  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:43:48.416258  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:43:48.416329  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:48.434784  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:48.435113  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:48.435136  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:43:48.435824  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56880->127.0.0.1:32843: read: connection reset by peer
	I0919 22:43:51.607226  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:43:51.607258  306754 ubuntu.go:182] provisioning hostname "ha-434755-m02"
	I0919 22:43:51.607316  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:51.633598  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:51.633884  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:51.633938  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m02 && echo "ha-434755-m02" | sudo tee /etc/hostname
	I0919 22:43:51.805490  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-434755-m02
	
	I0919 22:43:51.805587  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:51.826450  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:51.826760  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:51.826788  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:43:51.970064  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:51.970101  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:43:51.970124  306754 ubuntu.go:190] setting up certificates
	I0919 22:43:51.970136  306754 provision.go:84] configureAuth start
	I0919 22:43:51.970188  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:51.988229  306754 provision.go:143] copyHostCerts
	I0919 22:43:51.988268  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:51.988319  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:43:51.988330  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:43:51.988413  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:43:51.988530  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:51.988559  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:43:51.988569  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:43:51.988615  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:43:51.988682  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:51.988707  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:43:51.988716  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:43:51.988751  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:43:51.988819  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m02 san=[127.0.0.1 192.168.49.3 ha-434755-m02 localhost minikube]
	I0919 22:43:52.050577  306754 provision.go:177] copyRemoteCerts
	I0919 22:43:52.050643  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:43:52.050694  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.067930  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:52.167387  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:43:52.167461  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:43:52.195400  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:43:52.195494  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:43:52.243020  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:43:52.243105  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:43:52.268296  306754 provision.go:87] duration metric: took 298.143794ms to configureAuth
	I0919 22:43:52.268326  306754 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:43:52.268617  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:52.268672  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.290487  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.290785  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.290806  306754 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 22:43:52.436575  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 22:43:52.436601  306754 ubuntu.go:71] root file system type: overlay
	I0919 22:43:52.436754  306754 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 22:43:52.436845  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.458543  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.458862  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.458970  306754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 22:43:52.619127  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 22:43:52.619226  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.644964  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:43:52.645263  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:43:52.645292  306754 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 22:43:52.829696  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:43:52.829732  306754 machine.go:96] duration metric: took 4.413457378s to provisionDockerMachine
	I0919 22:43:52.829747  306754 start.go:293] postStartSetup for "ha-434755-m02" (driver="docker")
	I0919 22:43:52.829761  306754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:43:52.829855  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:43:52.829911  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:52.856939  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:52.971256  306754 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:43:52.978974  306754 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:43:52.979019  306754 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:43:52.979032  306754 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:43:52.979041  306754 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:43:52.979055  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 22:43:52.979117  306754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 22:43:52.979236  306754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 22:43:52.979256  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /etc/ssl/certs/1463352.pem
	I0919 22:43:52.979456  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:43:52.995447  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:43:53.038308  306754 start.go:296] duration metric: took 208.542001ms for postStartSetup
	I0919 22:43:53.038394  306754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:43:53.038452  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.064431  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.171228  306754 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:43:53.177085  306754 fix.go:56] duration metric: took 5.118486555s for fixHost
	I0919 22:43:53.177114  306754 start.go:83] releasing machines lock for "ha-434755-m02", held for 5.118539892s
	I0919 22:43:53.177184  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m02
	I0919 22:43:53.204531  306754 out.go:179] * Found network options:
	I0919 22:43:53.205707  306754 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:43:53.206847  306754 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:43:53.206895  306754 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:43:53.207000  306754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:43:53.207055  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.207652  306754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:43:53.207806  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m02
	I0919 22:43:53.235982  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.236550  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m02/id_rsa Username:docker}
	I0919 22:43:53.441884  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 22:43:53.469344  306754 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:43:53.469423  306754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:43:53.482231  306754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:43:53.482266  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:53.482302  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:53.482432  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:53.505978  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 22:43:53.519731  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 22:43:53.533562  306754 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 22:43:53.533642  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 22:43:53.547659  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:53.562526  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 22:43:53.576145  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 22:43:53.589986  306754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:43:53.600894  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 22:43:53.613432  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 22:43:53.626414  306754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 22:43:53.637253  306754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:43:53.648221  306754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:43:53.661018  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:53.822661  306754 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 22:43:54.019194  306754 start.go:495] detecting cgroup driver to use...
	I0919 22:43:54.019260  306754 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:43:54.019325  306754 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 22:43:54.032655  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:54.044990  306754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:43:54.063162  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:43:54.074199  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 22:43:54.085305  306754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:43:54.102241  306754 ssh_runner.go:195] Run: which cri-dockerd
	I0919 22:43:54.105666  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 22:43:54.114090  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 22:43:54.132600  306754 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 22:43:54.260538  306754 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 22:43:54.391535  306754 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 22:43:54.391578  306754 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 22:43:54.413001  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 22:43:54.424344  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:43:54.544952  306754 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 22:44:20.727235  306754 ssh_runner.go:235] Completed: sudo systemctl restart docker: (26.182243315s)
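	The daemon.json written a few lines above (129 bytes; its contents are not captured in this log) is what points dockerd at the systemd cgroup driver before this restart. As an illustrative check only, not run by the test, the effective driver can be confirmed afterwards with:
	  docker info --format '{{.CgroupDriver}}'    # expected to print "systemd" once the reconfiguration has taken effect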
	I0919 22:44:20.727357  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:44:20.757386  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 22:44:20.778539  306754 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 22:44:20.809547  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:44:20.829218  306754 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 22:44:20.990462  306754 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 22:44:21.122804  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:21.270361  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 22:44:21.303663  306754 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 22:44:21.327719  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:21.470493  306754 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 22:44:21.607780  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 22:44:21.630475  306754 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 22:44:21.630569  306754 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 22:44:21.636470  306754 start.go:563] Will wait 60s for crictl version
	I0919 22:44:21.636546  306754 ssh_runner.go:195] Run: which crictl
	I0919 22:44:21.642013  306754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:44:21.708621  306754 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 22:44:21.708700  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:44:21.745948  306754 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 22:44:21.791651  306754 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 22:44:21.792927  306754 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:44:21.794235  306754 cli_runner.go:164] Run: docker network inspect ha-434755 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:44:21.824158  306754 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:44:21.830914  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:44:21.849027  306754 mustload.go:65] Loading cluster: ha-434755
	I0919 22:44:21.849434  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:21.850149  306754 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:44:21.882961  306754 host.go:66] Checking if "ha-434755" exists ...
	I0919 22:44:21.883657  306754 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755 for IP: 192.168.49.3
	I0919 22:44:21.883744  306754 certs.go:194] generating shared ca certs ...
	I0919 22:44:21.883768  306754 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:44:21.884113  306754 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 22:44:21.884203  306754 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 22:44:21.884215  306754 certs.go:256] generating profile certs ...
	I0919 22:44:21.884312  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key
	I0919 22:44:21.884376  306754 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key.be912a57
	I0919 22:44:21.884420  306754 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key
	I0919 22:44:21.884432  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:44:21.884449  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:44:21.884461  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:44:21.884474  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:44:21.884487  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:44:21.884533  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:44:21.884551  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:44:21.884564  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:44:21.884619  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 22:44:21.884655  306754 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 22:44:21.884665  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:44:21.884696  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:44:21.884724  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:44:21.884751  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 22:44:21.884806  306754 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 22:44:21.884844  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> /usr/share/ca-certificates/1463352.pem
	I0919 22:44:21.884861  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:21.884877  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem -> /usr/share/ca-certificates/146335.pem
	I0919 22:44:21.884941  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755
	I0919 22:44:21.919935  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755/id_rsa Username:docker}
	I0919 22:44:22.033094  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:44:22.044103  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:44:22.087129  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:44:22.100830  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0919 22:44:22.140176  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:44:22.151511  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:44:22.180512  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:44:22.191050  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:44:22.232218  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:44:22.246764  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:44:22.284941  306754 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:44:22.293161  306754 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:44:22.330676  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:44:22.408749  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 22:44:22.470668  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:44:22.530262  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:44:22.590668  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:44:22.649927  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:44:22.745218  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:44:22.799341  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:44:22.854479  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 22:44:22.916815  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:44:22.986618  306754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 22:44:23.066105  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:44:23.128853  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0919 22:44:23.196912  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:44:23.238915  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:44:23.303660  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:44:23.346103  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:44:23.386289  306754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:44:23.418560  306754 ssh_runner.go:195] Run: openssl version
	I0919 22:44:23.428674  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 22:44:23.449549  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.458143  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.458211  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 22:44:23.469982  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 22:44:23.484616  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 22:44:23.499402  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.506645  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.506733  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 22:44:23.517121  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:44:23.533092  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:44:23.549662  306754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.557627  306754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.557678  306754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:44:23.567422  306754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:44:23.580227  306754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:44:23.585484  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:44:23.595352  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:44:23.604710  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:44:23.613959  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:44:23.623804  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:44:23.635631  306754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
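	The -checkend 86400 invocations above ask openssl whether each certificate remains valid for at least 86400 seconds (24 hours); the command exits 0 if so, which presumably lets minikube reuse the existing certificate instead of regenerating it. An equivalent manual check, illustrative only and using a path taken from the log, is:
	  openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt    # show the expiry date
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h+" || echo "expires within 24h"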
	I0919 22:44:23.645350  306754 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0919 22:44:23.645589  306754 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-434755-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-434755 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:44:23.645640  306754 kube-vip.go:115] generating kube-vip config ...
	I0919 22:44:23.645686  306754 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:44:23.663723  306754 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:44:23.663787  306754 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
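	Because the lsmod check above found no ip_vs modules, kube-vip skips IPVS load-balancing and runs with ARP and leader election (vip_arp and vip_leaderelection are "true" in the generated manifest): the node holding the lease advertises the VIP 192.168.49.254 on eth0 and the API is reached through it on port 8443. An illustrative way to verify this from a control-plane node, not part of the test run:
	  ip addr show eth0 | grep 192.168.49.254       # the VIP should be bound on whichever node currently holds the lease
	  curl -k https://192.168.49.254:8443/healthz   # API server health via the VIP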
	I0919 22:44:23.663834  306754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:44:23.677238  306754 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:44:23.677352  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:44:23.689915  306754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 22:44:23.718367  306754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:44:23.744992  306754 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:44:23.775604  306754 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:44:23.782963  306754 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:44:23.801416  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:24.017559  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:44:24.043227  306754 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 22:44:24.043612  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:24.046279  306754 out.go:179] * Verifying Kubernetes components...
	I0919 22:44:24.047278  306754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:44:24.245342  306754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:44:24.270301  306754 kapi.go:59] client config for ha-434755: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/client.key", CAFile:"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:44:24.270404  306754 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:44:24.270772  306754 node_ready.go:35] waiting up to 6m0s for node "ha-434755-m02" to be "Ready" ...
	I0919 22:44:31.544458  306754 node_ready.go:49] node "ha-434755-m02" is "Ready"
	I0919 22:44:31.544524  306754 node_ready.go:38] duration metric: took 7.273702746s for node "ha-434755-m02" to be "Ready" ...
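	The wait above polls the node object until its Ready condition becomes True, capped at 6m0s. An equivalent manual check against the same profile (illustrative only) would be:
	  kubectl --context ha-434755 wait --for=condition=Ready node/ha-434755-m02 --timeout=6m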
	I0919 22:44:31.544551  306754 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:44:31.544614  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:32.044840  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:32.544670  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:33.044936  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:33.545700  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:34.044733  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:34.545290  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.045175  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.545700  306754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:35.558902  306754 api_server.go:72] duration metric: took 11.515208288s to wait for apiserver process to appear ...
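
The apiserver process wait simply re-runs pgrep every half second until it exits 0; minikube runs the command over SSH via ssh_runner, but the same loop can be sketched locally (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		// -x exact match, -n newest process, -f match against the full command line.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the kube-apiserver process")
    }
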
	I0919 22:44:35.558925  306754 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:44:35.558943  306754 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:44:35.564007  306754 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:44:35.564963  306754 api_server.go:141] control plane version: v1.34.0
	I0919 22:44:35.564986  306754 api_server.go:131] duration metric: took 6.054881ms to wait for apiserver health ...
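
The healthz probe is an HTTPS GET against the apiserver using the profile's client certificate and the cluster CA. A standalone sketch (certificate paths and endpoint taken from the log, everything else illustrative):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	base := "/home/jenkins/minikube-integration/21594-142711/.minikube"
    	cert, err := tls.LoadX509KeyPair(base+"/profiles/ha-434755/client.crt", base+"/profiles/ha-434755/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile(base + "/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
    		Certificates: []tls.Certificate{cert},
    		RootCAs:      pool,
    	}}}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
    }
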
	I0919 22:44:35.564996  306754 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:44:35.569458  306754 system_pods.go:59] 17 kube-system pods found
	I0919 22:44:35.569484  306754 system_pods.go:61] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:44:35.569492  306754 system_pods.go:61] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:44:35.569514  306754 system_pods.go:61] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:44:35.569520  306754 system_pods.go:61] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:44:35.569525  306754 system_pods.go:61] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:44:35.569529  306754 system_pods.go:61] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:44:35.569534  306754 system_pods.go:61] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:44:35.569540  306754 system_pods.go:61] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:44:35.569550  306754 system_pods.go:61] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:44:35.569564  306754 system_pods.go:61] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:44:35.569569  306754 system_pods.go:61] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:44:35.569576  306754 system_pods.go:61] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:44:35.569581  306754 system_pods.go:61] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:44:35.569586  306754 system_pods.go:61] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:44:35.569596  306754 system_pods.go:61] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0919 22:44:35.569602  306754 system_pods.go:61] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:44:35.569609  306754 system_pods.go:61] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:44:35.569650  306754 system_pods.go:74] duration metric: took 4.64653ms to wait for pod list to return data ...
	I0919 22:44:35.569660  306754 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:44:35.572028  306754 default_sa.go:45] found service account: "default"
	I0919 22:44:35.572046  306754 default_sa.go:55] duration metric: took 2.375873ms for default service account to be created ...
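
The kube-system pod wait and the default service-account wait translate to a pod list and a ServiceAccount get. A client-go sketch of both checks (kubeconfig location is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
    	}
    	if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("default service account exists")
    }
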
	I0919 22:44:35.572055  306754 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:44:35.575284  306754 system_pods.go:86] 17 kube-system pods found
	I0919 22:44:35.575302  306754 system_pods.go:89] "coredns-66bc5c9577-4lmln" [0f31e1cc-6bbb-4987-93c7-48e61288b609] Running
	I0919 22:44:35.575307  306754 system_pods.go:89] "coredns-66bc5c9577-w8trg" [54431fee-554c-4c3c-9c81-d779981d36db] Running
	I0919 22:44:35.575311  306754 system_pods.go:89] "etcd-ha-434755" [efa4db41-3739-45d6-ada5-d66dd5b82f46] Running
	I0919 22:44:35.575314  306754 system_pods.go:89] "etcd-ha-434755-m02" [c47d7da8-6337-4062-a7d1-707ebc8f4df5] Running
	I0919 22:44:35.575318  306754 system_pods.go:89] "kindnet-74q9s" [06bab6e9-ad22-4651-947e-723307c31d04] Running
	I0919 22:44:35.575321  306754 system_pods.go:89] "kindnet-djvx4" [dd2c97ac-215c-4657-a3af-bf74603285af] Running
	I0919 22:44:35.575324  306754 system_pods.go:89] "kube-apiserver-ha-434755" [fdcd2f64-6b9f-40ed-be07-24beef072bca] Running
	I0919 22:44:35.575327  306754 system_pods.go:89] "kube-apiserver-ha-434755-m02" [bcc4bd8e-7086-4034-94f8-865e02212e7b] Running
	I0919 22:44:35.575331  306754 system_pods.go:89] "kube-controller-manager-ha-434755" [66066c78-f094-492d-9c71-a683cccd45a0] Running
	I0919 22:44:35.575338  306754 system_pods.go:89] "kube-controller-manager-ha-434755-m02" [290b348b-6c1a-4891-990b-c943066ab212] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:44:35.575343  306754 system_pods.go:89] "kube-proxy-4cnsm" [a477a521-e24b-449d-854f-c873cb517164] Running
	I0919 22:44:35.575347  306754 system_pods.go:89] "kube-proxy-gzpg8" [9d9843d9-c2ca-4751-8af5-f8fc91cf07c9] Running
	I0919 22:44:35.575350  306754 system_pods.go:89] "kube-scheduler-ha-434755" [593d9f5b-40f3-47b7-aef2-b25348983754] Running
	I0919 22:44:35.575354  306754 system_pods.go:89] "kube-scheduler-ha-434755-m02" [34109527-5e07-415c-9bfc-d500d75092ca] Running
	I0919 22:44:35.575358  306754 system_pods.go:89] "kube-vip-ha-434755" [a8de26f0-2b4f-417b-9896-217d4177060b] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0919 22:44:35.575362  306754 system_pods.go:89] "kube-vip-ha-434755-m02" [30071515-3665-4872-a66b-3d8ddccb0cae] Running
	I0919 22:44:35.575367  306754 system_pods.go:89] "storage-provisioner" [fb950ab4-a515-4298-b7f0-e01d6290af75] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:44:35.575373  306754 system_pods.go:126] duration metric: took 3.312161ms to wait for k8s-apps to be running ...
	I0919 22:44:35.575382  306754 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:44:35.575419  306754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:44:35.587035  306754 system_svc.go:56] duration metric: took 11.645688ms WaitForService to wait for kubelet
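
The kubelet check is a plain systemctl active-state probe, run over SSH in minikube; an equivalent local sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means the kubelet unit is active; anything else means it is not.
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
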
	I0919 22:44:35.587057  306754 kubeadm.go:578] duration metric: took 11.543372799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:44:35.587077  306754 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:44:35.592372  306754 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:44:35.592397  306754 node_conditions.go:123] node cpu capacity is 8
	I0919 22:44:35.592411  306754 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:44:35.592417  306754 node_conditions.go:123] node cpu capacity is 8
	I0919 22:44:35.592423  306754 node_conditions.go:105] duration metric: took 5.340807ms to run NodePressure ...
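
The NodePressure step reads each node's reported capacity. A client-go sketch that prints the same cpu and ephemeral-storage figures (kubeconfig location is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
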
	I0919 22:44:35.592437  306754 start.go:241] waiting for startup goroutines ...
	I0919 22:44:35.592469  306754 start.go:255] writing updated cluster config ...
	I0919 22:44:35.593840  306754 out.go:203] 
	I0919 22:44:35.595103  306754 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:44:35.595225  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.596740  306754 out.go:179] * Starting "ha-434755-m04" worker node in "ha-434755" cluster
	I0919 22:44:35.597955  306754 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:44:35.598928  306754 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:44:35.599836  306754 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:44:35.599853  306754 cache.go:58] Caching tarball of preloaded images
	I0919 22:44:35.599867  306754 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:44:35.599952  306754 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 22:44:35.599968  306754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 22:44:35.600070  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.618954  306754 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:44:35.618971  306754 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:44:35.618985  306754 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:44:35.619006  306754 start.go:360] acquireMachinesLock for ha-434755-m04: {Name:mkcb1ae14090fd5c105c7696f226eb54b7426db9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:44:35.619056  306754 start.go:364] duration metric: took 34.434µs to acquireMachinesLock for "ha-434755-m04"
	I0919 22:44:35.619073  306754 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:44:35.619078  306754 fix.go:54] fixHost starting: m04
	I0919 22:44:35.619277  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:44:35.635488  306754 fix.go:112] recreateIfNeeded on ha-434755-m04: state=Stopped err=<nil>
	W0919 22:44:35.635522  306754 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:44:35.636937  306754 out.go:252] * Restarting existing docker container for "ha-434755-m04" ...
	I0919 22:44:35.636998  306754 cli_runner.go:164] Run: docker start ha-434755-m04
	I0919 22:44:35.880863  306754 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:44:35.900207  306754 kic.go:430] container "ha-434755-m04" state is running.
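
Restarting the stopped worker is a docker start followed by an inspect of .State.Status. A small sketch wrapping the docker CLI (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState returns the value of .State.Status for the named container.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	if err := exec.Command("docker", "start", "ha-434755-m04").Run(); err != nil {
    		panic(err)
    	}
    	state, err := containerState("ha-434755-m04")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("state:", state) // expect "running"
    }
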
	I0919 22:44:35.900738  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:44:35.920815  306754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/ha-434755/config.json ...
	I0919 22:44:35.921050  306754 machine.go:93] provisionDockerMachine start ...
	I0919 22:44:35.921112  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:44:35.939494  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:44:35.939855  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:44:35.939875  306754 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:44:35.940477  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47578->127.0.0.1:32848: read: connection reset by peer
	I0919 22:44:38.976845  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:42.013674  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:45.050648  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:48.087177  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:51.123806  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:54.159311  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:44:57.196664  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:00.231780  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:03.268309  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:06.304115  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:09.339820  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:12.377025  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:15.413106  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:18.449198  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:21.486347  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:24.523246  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:27.559102  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:30.595066  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:33.632178  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:36.668095  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:39.705080  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:42.741620  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:45.778094  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:48.814216  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:51.850659  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:54.888764  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:45:57.926773  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:00.962612  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:04.000597  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:07.037610  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:10.073879  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:13.110354  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:16.147874  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:19.184615  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:22.220478  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:25.255344  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:28.291736  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:31.329368  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:34.365263  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:37.401216  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:40.436801  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:43.474274  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:46.511002  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:49.548640  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:52.587262  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:55.623128  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:58.659480  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:01.696650  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:04.731946  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:07.768095  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:10.804300  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:13.840657  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:16.878024  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:19.912838  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:22.950049  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:25.985035  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:29.020804  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:32.057784  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:35.095114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:38.095793  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
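
The repeated "handshake failed ... [none publickey]" lines above mean the node rejected the client's public key, so every dial attempt fails until the retry budget is exhausted and the command gives up with empty output. The dial that libmachine keeps retrying is essentially the following (golang.org/x/crypto/ssh; key path and port taken from the log):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := "/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa"
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32848", cfg)
    	if err != nil {
    		// This is the failure mode seen above: the node no longer accepts this key.
    		fmt.Println("dial failed:", err)
    		return
    	}
    	defer client.Close()
    	fmt.Println("ssh handshake succeeded")
    }
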
	I0919 22:47:38.095828  306754 ubuntu.go:182] provisioning hostname "ha-434755-m04"
	I0919 22:47:38.095896  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:47:38.114241  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:47:38.114586  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:47:38.114610  306754 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-434755-m04 && echo "ha-434755-m04" | sudo tee /etc/hostname
	I0919 22:47:38.149901  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:41.186255  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:44.223737  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:47.260806  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:50.296562  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:53.335133  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:56.371717  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:59.406991  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:02.443645  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:05.479626  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:08.514740  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:11.552150  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:14.588794  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:17.625824  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:20.661951  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:23.698677  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:26.736808  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:29.772266  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:32.808846  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:35.844845  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:38.880247  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:41.916844  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:44.951964  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:47.987158  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:51.023891  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:54.060750  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:57.098459  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:00.133430  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:03.169755  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:06.205767  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:09.241916  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.279154  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:15.314739  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:18.354078  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:21.391146  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:24.426978  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:27.464438  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:30.500003  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:33.536668  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:36.573788  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:39.609153  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:42.644505  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:45.679846  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:48.714985  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:51.753114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:54.789673  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:57.829152  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:00.866647  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:03.903813  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:06.940767  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:09.977770  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:13.014880  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:16.052297  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:19.088322  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:22.126414  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:25.162071  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:28.198477  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.234533  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:34.271823  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:37.308353  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:40.308595  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:50:40.308717  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:50:40.328327  306754 main.go:141] libmachine: Using SSH client type: native
	I0919 22:50:40.328634  306754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:50:40.328654  306754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-434755-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-434755-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-434755-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:50:40.364113  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:43.401607  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:46.438588  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:49.474372  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:52.510149  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:55.545433  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:58.582376  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:01.618889  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:04.654718  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:07.689743  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:10.726438  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:13.763371  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:16.799701  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:19.836415  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:22.875036  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:25.910558  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:28.946749  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:31.983660  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:35.019740  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:38.057188  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:41.093531  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:44.130632  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:47.167719  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.204000  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.242098  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:56.278177  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:59.315114  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:02.351376  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:05.387418  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:08.424418  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:11.461805  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:14.496890  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:17.533764  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:20.569792  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:23.606298  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:26.642016  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:29.679917  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:32.716729  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:35.751860  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:38.788063  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:41.824681  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:44.860632  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:47.896783  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:50.933686  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:53.970455  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:57.007607  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:00.043781  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:03.080464  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:06.116459  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:09.153136  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.190750  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:15.226325  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:18.262179  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:21.298840  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:24.334155  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:27.371283  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:30.406705  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:33.443174  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:36.480706  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:39.515984  306754 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:42.518160  306754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:53:42.518213  306754 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 22:53:42.518253  306754 ubuntu.go:190] setting up certificates
	I0919 22:53:42.518270  306754 provision.go:84] configureAuth start
	I0919 22:53:42.518345  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:42.536471  306754 provision.go:143] copyHostCerts
	I0919 22:53:42.536541  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:42.536587  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:42.536600  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:42.536699  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:42.536849  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:42.536874  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:42.536881  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:42.536910  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:42.536960  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:42.536976  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:42.536982  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:42.537005  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:42.537075  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
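
Generating the server cert with those SANs corresponds to signing a fresh key with the existing CA. A condensed crypto/x509 sketch (file names follow the log; parsing the CA key as PKCS#1 RSA is an assumption):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"errors"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // loadCA reads the CA certificate and key from PEM files.
    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
    	certPEM, err := os.ReadFile(certPath)
    	if err != nil {
    		return nil, nil, err
    	}
    	keyPEM, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, nil, err
    	}
    	cb, _ := pem.Decode(certPEM)
    	kb, _ := pem.Decode(keyPEM)
    	if cb == nil || kb == nil {
    		return nil, nil, errors.New("no PEM block found")
    	}
    	caCert, err := x509.ParseCertificate(cb.Bytes)
    	if err != nil {
    		return nil, nil, err
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes) // assumes an RSA (PKCS#1) CA key
    	if err != nil {
    		return nil, nil, err
    	}
    	return caCert, caKey, nil
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/21594-142711/.minikube/certs"
    	caCert, caKey, err := loadCA(base+"/ca.pem", base+"/ca-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-434755-m04"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-434755-m04", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
    	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
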
	I0919 22:53:42.931587  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:42.931644  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:42.931681  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:42.949394  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:42.984311  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:42.984346  306754 retry.go:31] will retry after 327.821016ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:43.347560  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.347587  306754 retry.go:31] will retry after 243.46549ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:43.627078  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.627104  306754 retry.go:31] will retry after 664.059911ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:44.327907  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.328017  306754 retry.go:31] will retry after 359.803869ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.688672  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:44.706219  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:44.741632  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.741661  306754 retry.go:31] will retry after 220.247897ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:44.996988  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:44.997035  306754 retry.go:31] will retry after 419.776326ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:45.452683  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:45.452712  306754 retry.go:31] will retry after 552.672736ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:46.041337  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.041381  306754 retry.go:31] will retry after 500.704026ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:46.578470  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.578589  306754 provision.go:87] duration metric: took 4.060308089s to configureAuth
	W0919 22:53:46.578605  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.578628  306754 retry.go:31] will retry after 84.832µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
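
The retry.go lines implement a bounded retry with randomized, growing delays between attempts. A generic sketch of that pattern (parameters are illustrative, not minikube's values):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs op until it succeeds, the attempt budget is spent, or the overall
    // timeout elapses; each failure waits a randomized delay that grows per attempt.
    func retry(attempts int, timeout time.Duration, op func() error) error {
    	deadline := time.Now().Add(timeout)
    	var err error
    	for i := 0; i < attempts && time.Now().Before(deadline); i++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		backoff := time.Duration(i+1) * time.Duration(rand.Int63n(400)+100) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", backoff, err)
    		time.Sleep(backoff)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 10*time.Second, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("ssh: handshake failed")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
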
	I0919 22:53:46.579768  306754 provision.go:84] configureAuth start
	I0919 22:53:46.579839  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:46.596992  306754 provision.go:143] copyHostCerts
	I0919 22:53:46.597027  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:46.597061  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:46.597072  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:46.597124  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:46.597253  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:46.597282  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:46.597289  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:46.597314  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:46.597367  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:46.597384  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:46.597389  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:46.597408  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:46.597479  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:46.734391  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:46.734445  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:46.734480  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:46.751738  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:46.786763  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.786799  306754 retry.go:31] will retry after 343.684216ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.166247  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.166274  306754 retry.go:31] will retry after 217.133577ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.420746  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.420780  306754 retry.go:31] will retry after 498.567333ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:47.955439  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:47.955479  306754 retry.go:31] will retry after 494.414185ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:48.486082  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.486169  306754 retry.go:31] will retry after 171.267823ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.658623  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:48.675595  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:48.710840  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.710867  306754 retry.go:31] will retry after 201.247835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:48.946825  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:48.946857  306754 retry.go:31] will retry after 359.387077ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:49.341697  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.341725  306754 retry.go:31] will retry after 422.852532ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:49.800193  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.800226  306754 retry.go:31] will retry after 732.23205ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:50.569169  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.569273  306754 provision.go:87] duration metric: took 3.98948408s to configureAuth
	W0919 22:53:50.569284  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.569299  306754 retry.go:31] will retry after 150.475µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.570391  306754 provision.go:84] configureAuth start
	I0919 22:53:50.570482  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:50.589453  306754 provision.go:143] copyHostCerts
	I0919 22:53:50.589488  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:50.589595  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:50.589615  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:50.589694  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:50.589786  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:50.589811  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:50.589820  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:50.589854  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:50.589919  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:50.589945  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:50.589951  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:50.589983  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:50.590079  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:50.723808  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:50.723874  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:50.723919  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:50.741265  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:50.776681  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:50.776711  306754 retry.go:31] will retry after 242.012835ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:51.054160  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:51.054189  306754 retry.go:31] will retry after 469.918328ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:51.560111  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:51.560142  306754 retry.go:31] will retry after 806.884367ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:52.403950  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.404042  306754 retry.go:31] will retry after 174.387519ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.579469  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:52.598080  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:52.634064  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.634093  306754 retry.go:31] will retry after 145.829901ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:52.815524  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.815556  306754 retry.go:31] will retry after 498.800271ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:53.351527  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:53.351560  306754 retry.go:31] will retry after 373.407394ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:53.760023  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:53.760058  306754 retry.go:31] will retry after 694.32313ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:54.489838  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.489939  306754 provision.go:87] duration metric: took 3.919518578s to configureAuth
	W0919 22:53:54.489953  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.489980  306754 retry.go:31] will retry after 156.391µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.491170  306754 provision.go:84] configureAuth start
	I0919 22:53:54.491235  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:54.507800  306754 provision.go:143] copyHostCerts
	I0919 22:53:54.507832  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:54.507856  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:54.507865  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:54.507917  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:54.507999  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:54.508016  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:54.508025  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:54.508046  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:54.508134  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:54.508160  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:54.508172  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:54.508194  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:54.508255  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:54.702308  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:54.702363  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:54.702402  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:54.719508  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:54.754479  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:54.754532  306754 retry.go:31] will retry after 262.57616ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:55.054473  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.054534  306754 retry.go:31] will retry after 410.205034ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:55.499921  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.499953  306754 retry.go:31] will retry after 516.948693ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:56.052821  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.052920  306754 retry.go:31] will retry after 287.471529ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.341489  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:56.359419  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:56.395053  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.395085  306754 retry.go:31] will retry after 362.750816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:56.793926  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:56.793959  306754 retry.go:31] will retry after 405.598886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:57.235521  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:57.235550  306754 retry.go:31] will retry after 354.631954ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:57.627139  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:57.627176  306754 retry.go:31] will retry after 562.91369ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:58.226126  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.226210  306754 provision.go:87] duration metric: took 3.735019016s to configureAuth
	W0919 22:53:58.226219  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.226245  306754 retry.go:31] will retry after 277.766µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.227384  306754 provision.go:84] configureAuth start
	I0919 22:53:58.227448  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:53:58.244327  306754 provision.go:143] copyHostCerts
	I0919 22:53:58.244360  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:58.244387  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:53:58.244399  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:53:58.244460  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:53:58.244571  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:58.244592  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:53:58.244596  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:53:58.244620  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:53:58.244684  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:58.244701  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:53:58.244707  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:53:58.244726  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:53:58.244820  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:53:58.526249  306754 provision.go:177] copyRemoteCerts
	I0919 22:53:58.526305  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:53:58.526339  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:53:58.544162  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:53:58.580810  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.580834  306754 retry.go:31] will retry after 244.293404ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:58.861398  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.861432  306754 retry.go:31] will retry after 274.454092ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:59.172246  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:59.172275  306754 retry.go:31] will retry after 475.218135ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:53:59.682695  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:59.682786  306754 retry.go:31] will retry after 366.451516ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.050408  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:00.068885  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:00.104639  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.104667  306754 retry.go:31] will retry after 245.587287ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:00.386000  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.386029  306754 retry.go:31] will retry after 347.162049ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:00.768436  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:00.768468  306754 retry.go:31] will retry after 475.508039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:01.279090  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.279200  306754 provision.go:87] duration metric: took 3.05179768s to configureAuth
	W0919 22:54:01.279212  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.279227  306754 retry.go:31] will retry after 673.05µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.280405  306754 provision.go:84] configureAuth start
	I0919 22:54:01.280490  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:01.298157  306754 provision.go:143] copyHostCerts
	I0919 22:54:01.298201  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:01.298247  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:01.298259  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:01.298342  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:01.298442  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:01.298476  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:01.298487  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:01.298552  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:01.298643  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:01.298669  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:01.298679  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:01.298710  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:01.298801  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:01.568200  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:01.568271  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:01.568319  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:01.586091  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:01.621653  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.621687  306754 retry.go:31] will retry after 250.678085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:01.908948  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.908990  306754 retry.go:31] will retry after 380.583231ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:02.325550  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:02.325585  306754 retry.go:31] will retry after 757.589746ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:03.118940  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.119032  306754 retry.go:31] will retry after 297.891821ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.417585  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:03.435527  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:03.470577  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.470608  306754 retry.go:31] will retry after 135.697801ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:03.641710  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:03.641743  306754 retry.go:31] will retry after 339.0934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:04.015950  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.015984  306754 retry.go:31] will retry after 772.616366ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:04.824951  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.824980  306754 retry.go:31] will retry after 516.227388ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:05.376717  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.376824  306754 provision.go:87] duration metric: took 4.096399764s to configureAuth
	W0919 22:54:05.376836  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.376847  306754 retry.go:31] will retry after 386.581µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.378139  306754 provision.go:84] configureAuth start
	I0919 22:54:05.378216  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:05.395262  306754 provision.go:143] copyHostCerts
	I0919 22:54:05.395294  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:05.395318  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:05.395326  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:05.395380  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:05.395528  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:05.395554  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:05.395562  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:05.395588  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:05.395653  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:05.395671  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:05.395674  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:05.395694  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:05.395786  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:05.584739  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:05.584799  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:05.584847  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:05.602411  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:05.637553  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.637578  306754 retry.go:31] will retry after 208.291934ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:05.881825  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:05.881858  306754 retry.go:31] will retry after 455.61088ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:06.374930  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:06.374964  306754 retry.go:31] will retry after 825.914647ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:07.236166  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.236241  306754 retry.go:31] will retry after 251.800701ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.488767  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:07.506531  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:07.542053  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.542086  306754 retry.go:31] will retry after 217.319386ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:07.795257  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.795290  306754 retry.go:31] will retry after 208.063886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:08.039248  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.039283  306754 retry.go:31] will retry after 651.900068ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:08.727030  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.727113  306754 provision.go:87] duration metric: took 3.348957352s to configureAuth
	W0919 22:54:08.727125  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.727139  306754 retry.go:31] will retry after 1.333904ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.729326  306754 provision.go:84] configureAuth start
	I0919 22:54:08.729395  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:08.746263  306754 provision.go:143] copyHostCerts
	I0919 22:54:08.746299  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:08.746330  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:08.746341  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:08.746408  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:08.746536  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:08.746561  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:08.746569  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:08.746594  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:08.746665  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:08.746682  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:08.746688  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:08.746708  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:08.746771  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:08.899961  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:08.900036  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:08.900088  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:08.916656  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:08.952077  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:08.952107  306754 retry.go:31] will retry after 333.635936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:09.322368  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:09.322397  306754 retry.go:31] will retry after 351.188839ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:09.709321  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:09.709351  306754 retry.go:31] will retry after 424.380279ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:10.169679  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.169706  306754 retry.go:31] will retry after 622.981079ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:10.828443  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.828560  306754 provision.go:87] duration metric: took 2.09922013s to configureAuth
	W0919 22:54:10.828575  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.828586  306754 retry.go:31] will retry after 1.922293ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.830780  306754 provision.go:84] configureAuth start
	I0919 22:54:10.830861  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:10.849570  306754 provision.go:143] copyHostCerts
	I0919 22:54:10.849610  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:10.849637  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:10.849647  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:10.849698  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:10.849783  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:10.849806  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:10.849812  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:10.849876  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:10.849946  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:10.849963  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:10.849969  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:10.849989  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:10.850059  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:11.073047  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:11.073102  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:11.073135  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:11.090381  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:11.126669  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.126695  306754 retry.go:31] will retry after 314.361348ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:11.477730  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.477758  306754 retry.go:31] will retry after 260.511886ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:11.774311  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:11.774338  306754 retry.go:31] will retry after 432.523136ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:12.242876  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.242903  306754 retry.go:31] will retry after 624.693112ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:12.904153  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.904249  306754 provision.go:87] duration metric: took 2.073448479s to configureAuth
	W0919 22:54:12.904264  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.904278  306754 retry.go:31] will retry after 1.348392ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:12.906475  306754 provision.go:84] configureAuth start
	I0919 22:54:12.906566  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:12.923064  306754 provision.go:143] copyHostCerts
	I0919 22:54:12.923095  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:12.923120  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:12.923125  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:12.923171  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:12.923262  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:12.923284  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:12.923288  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:12.923309  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:12.923365  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:12.923382  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:12.923385  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:12.923403  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:12.923470  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:13.039711  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:13.039763  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:13.039805  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:13.056853  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:13.092736  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.092761  306754 retry.go:31] will retry after 176.485068ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:13.305354  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.305378  306754 retry.go:31] will retry after 493.048592ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:13.833852  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.833879  306754 retry.go:31] will retry after 577.272179ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:14.446849  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.446934  306754 retry.go:31] will retry after 370.926084ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.818553  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:14.836147  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:14.871457  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:14.871483  306754 retry.go:31] will retry after 208.784174ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.116890  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.116922  306754 retry.go:31] will retry after 431.415105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.584759  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.584793  306754 retry.go:31] will retry after 369.293791ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:15.989470  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:15.989526  306754 retry.go:31] will retry after 747.230625ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:16.771900  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.772010  306754 provision.go:87] duration metric: took 3.865514416s to configureAuth
	W0919 22:54:16.772022  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.772038  306754 retry.go:31] will retry after 5.016981ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.777965  306754 provision.go:84] configureAuth start
	I0919 22:54:16.778044  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:16.796080  306754 provision.go:143] copyHostCerts
	I0919 22:54:16.796115  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:16.796150  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:16.796160  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:16.796216  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:16.796282  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:16.796300  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:16.796306  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:16.796327  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:16.796366  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:16.796383  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:16.796389  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:16.796407  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:16.796452  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:16.908698  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:16.908757  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:16.908790  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:16.925506  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	W0919 22:54:16.960836  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.960866  306754 retry.go:31] will retry after 214.400755ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.211378  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.211405  306754 retry.go:31] will retry after 230.919633ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.477471  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.477521  306754 retry.go:31] will retry after 339.325482ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:17.851812  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:17.851846  306754 retry.go:31] will retry after 899.158848ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:54:18.786166  306754 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.786283  306754 provision.go:87] duration metric: took 2.008295325s to configureAuth
	W0919 22:54:18.786296  306754 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.786312  306754 retry.go:31] will retry after 5.67967ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:18.792526  306754 provision.go:84] configureAuth start
	I0919 22:54:18.792605  306754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-434755-m04
	I0919 22:54:18.810194  306754 provision.go:143] copyHostCerts
	I0919 22:54:18.810225  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:18.810251  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 22:54:18.810259  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 22:54:18.810312  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 22:54:18.810403  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:18.810421  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 22:54:18.810424  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 22:54:18.810448  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 22:54:18.810523  306754 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:18.810550  306754 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 22:54:18.810554  306754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 22:54:18.810577  306754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 22:54:18.810646  306754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.ha-434755-m04 san=[127.0.0.1 192.168.49.5 ha-434755-m04 localhost minikube]
	I0919 22:54:19.258474  306754 provision.go:177] copyRemoteCerts
	I0919 22:54:19.258556  306754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:54:19.258602  306754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-434755-m04
	I0919 22:54:19.276208  306754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/ha-434755-m04/id_rsa Username:docker}
	
	
	==> Docker <==
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0571a9b22aa8dba90ce65f75de015c275de4f02c9b11d07445117722c8bd5410\""
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"62cd9dd3b99a779d6b1ffe72046bafeef3d781c016335de5886ea2ca70bf69d4\""
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"16320e14d7e184563d15b2804dbf3e9612c480a8dcb1c6db031a96760d11777b\""
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b69dcaba1fe3e6996e4b1abe588d8ed828c8e1b07e61838a54d5c6eea3a368de\""
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"aae975e95bddb1ee1e82f7b41e51834b3ec8b8b95305a8638cb4f4c2420550b2\". Proceed without further sandbox information."
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"ba9ef91c2ce687f19a5a22c8332fe6dbaf2a8d254d42799e4572498ae880b17d\". Proceed without further sandbox information."
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"88eef40585d591e587fd71a3cd77a3900e0dd4b8c8cfac671dfc1bd6b26e6051\". Proceed without further sandbox information."
	Sep 19 22:43:47 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:47Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"1e4f3e71f1dc3d43753f28e1228a896c27fe1a7f50d0e53c0acd52e395830d70\". Proceed without further sandbox information."
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a34b5228df03af9c0decc0ae3bf336c4e56809a32cfc3ad3dc4b6478229539fa/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5313d39caef3bd3c6eed7b8e4df8f37af4cf8238de0353e60f421e0a34fb644c/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/78f748319a21337b7dd735d7e557385423a7797d90985e87a90c1da19bcddfdc/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b183ab1a64f4f2a47e177abae6698770831b6605015d38f8e00f790e556f0dec/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ce17f68634ce0749242d9ad56c7e46fc793ec11cb77c007779361848a2bd99a6/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-4lmln_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"0571a9b22aa8dba90ce65f75de015c275de4f02c9b11d07445117722c8bd5410\""
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-w8trg_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"16320e14d7e184563d15b2804dbf3e9612c480a8dcb1c6db031a96760d11777b\""
	Sep 19 22:43:48 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7b57f96db7-v7khr_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8d662a6a0cce0d2a16826cebfb1f342627aa7c367df671adf5932fdf952bcb33\""
	Sep 19 22:43:51 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 22:43:52 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/54a3605feacc308ef6fadfa2f024b8c4a41ed4a04f28fb4cb597399205a667c6/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:52 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/50bc6f8ebaef6fcf2a0003397258981e152e6bdebccdda0e3a7fc9b68b4b8fc8/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:52 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45f4d01a918b44ce11de8c67c9210f420962fd2779889322f36ccd9666a926e1/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:52 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/69d27695c5d2f00d033ff375ffa0b6ef888f6f2b98cd521dd67093c07e46c6bb/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:43:53 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9d47048d9bc95b73b02cdec3f76b27bc2b71befa870c0e179d839d9a7725509d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 19 22:43:53 ha-434755 cri-dockerd[1165]: time="2025-09-19T22:43:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2db89d0c387ddd89ae5537583aa93fa6e69ffaeff0bd974e0940a80e6a2ea72b/resolv.conf as [nameserver 192.168.49.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 22:44:23 ha-434755 dockerd[815]: time="2025-09-19T22:44:23.178868768Z" level=info msg="ignoring event" container=13188f4cc8c028604aded5da210e7d1cb2159bdabdd1cee52c0cc087984aa6fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 22:44:23 ha-434755 dockerd[815]: time="2025-09-19T22:44:23.382943633Z" level=info msg="ignoring event" container=a5df53fd919b38bdf18602b46cfd3f26ac5c4087d0bb51c9b1d15e160bc77025 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	393bc7c0291ba       6e38f40d628db       9 minutes ago       Running             storage-provisioner       5                   54a3605feacc3       storage-provisioner
	8a0d7d45f8189       765655ea60781       9 minutes ago       Running             kube-vip                  3                   ce17f68634ce0       kube-vip-ha-434755
	cf132ef265d8d       409467f978b4a       10 minutes ago      Running             kindnet-cni               2                   2db89d0c387dd       kindnet-djvx4
	8d105e7a52372       8c811b4aec35f       10 minutes ago      Running             busybox                   2                   9d47048d9bc95       busybox-7b57f96db7-v7khr
	6274013f16f07       52546a367cc9e       10 minutes ago      Running             coredns                   4                   45f4d01a918b4       coredns-66bc5c9577-w8trg
	06b1353b1636d       52546a367cc9e       10 minutes ago      Running             coredns                   4                   69d27695c5d2f       coredns-66bc5c9577-4lmln
	4ade0812e45d6       df0860106674d       10 minutes ago      Running             kube-proxy                2                   50bc6f8ebaef6       kube-proxy-gzpg8
	13188f4cc8c02       6e38f40d628db       10 minutes ago      Exited              storage-provisioner       4                   54a3605feacc3       storage-provisioner
	82877f0ef29e3       5f1f5298c888d       10 minutes ago      Running             etcd                      2                   5313d39caef3b       etcd-ha-434755
	a5df53fd919b3       765655ea60781       10 minutes ago      Exited              kube-vip                  2                   ce17f68634ce0       kube-vip-ha-434755
	02dea945955e3       46169d968e920       10 minutes ago      Running             kube-scheduler            2                   b183ab1a64f4f       kube-scheduler-ha-434755
	dde1bdfac1986       a0af72f2ec6d6       10 minutes ago      Running             kube-controller-manager   2                   78f748319a213       kube-controller-manager-ha-434755
	fa6431499ef46       90550c43ad2bc       10 minutes ago      Running             kube-apiserver            2                   a34b5228df03a       kube-apiserver-ha-434755
	c9a94a8bca16c       409467f978b4a       19 minutes ago      Exited              kindnet-cni               1                   11b728526ee59       kindnet-djvx4
	9a99065ed6ffc       8c811b4aec35f       19 minutes ago      Exited              busybox                   1                   8d662a6a0cce0       busybox-7b57f96db7-v7khr
	d61ae6148e697       52546a367cc9e       19 minutes ago      Exited              coredns                   3                   16320e14d7e18       coredns-66bc5c9577-w8trg
	ad8e40cf82bf1       52546a367cc9e       19 minutes ago      Exited              coredns                   3                   0571a9b22aa8d       coredns-66bc5c9577-4lmln
	54785bb274bdd       df0860106674d       19 minutes ago      Exited              kube-proxy                1                   474504d27788a       kube-proxy-gzpg8
	53ac6087206b0       46169d968e920       19 minutes ago      Exited              kube-scheduler            1                   bd64b2298ea2e       kube-scheduler-ha-434755
	379f8eb19bc07       a0af72f2ec6d6       19 minutes ago      Exited              kube-controller-manager   1                   ee54e9ddf31eb       kube-controller-manager-ha-434755
	deaf26f878611       90550c43ad2bc       19 minutes ago      Exited              kube-apiserver            1                   0a6b58aa00fb3       kube-apiserver-ha-434755
	af499a9e8d13a       5f1f5298c888d       19 minutes ago      Exited              etcd                      1                   e3041d5d93037       etcd-ha-434755
	
	
	==> coredns [06b1353b1636] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59587 - 3865 "HINFO IN 7883568341015349980.8978188069860544667. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02630189s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [6274013f16f0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44274 - 40804 "HINFO IN 8430765228302789409.1551961486957642341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031951732s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [ad8e40cf82bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54656 - 31900 "HINFO IN 352629652807927435.4937880101774792236. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027954607s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d61ae6148e69] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33352 - 30613 "HINFO IN 7566855018603772192.7692448748435092535. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034224338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-434755
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:54:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:51:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:51:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:51:20 +0000   Fri, 19 Sep 2025 22:24:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:51:20 +0000   Fri, 19 Sep 2025 22:24:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-434755
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f961ade798940039d19025228bc692d
	  System UUID:                777ab209-7204-4aa7-96a4-31869ecf7396
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-v7khr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-66bc5c9577-4lmln             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-w8trg             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-434755                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-djvx4                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-434755             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-434755    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-gzpg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-434755             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-434755                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           29m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-434755 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-434755 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-434755 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	  Normal  RegisteredNode           9m12s              node-controller  Node ha-434755 event: Registered Node ha-434755 in Controller
	
	
	Name:               ha-434755-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-434755-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-434755
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_25_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-434755-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:54:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:53:44 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:53:44 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:53:44 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:53:44 +0000   Fri, 19 Sep 2025 22:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-434755-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 8533065bf6444bf2b6790c96108131b8
	  System UUID:                515c6c02-eba2-449d-b3e2-53eaa5e2a2c5
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rhlg4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-434755-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-74q9s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-434755-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-434755-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-4cnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-434755-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-434755-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 17m                kube-proxy       
	  Normal  RegisteredNode           29m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           29m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  NodeNotReady             18m                node-controller  Node ha-434755-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           18m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-434755-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-434755-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	  Normal  RegisteredNode           9m12s              node-controller  Node ha-434755-m02 event: Registered Node ha-434755-m02 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 4e c7 de 18 97 08 06
	[  +3.920915] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[Sep19 22:17] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 92 b4 6c 9e 2e a2 08 06
	[  +0.000434] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 5e 22 ac 7f b0 08 06
	[  +0.000495] IPv4: martian source 10.244.0.32 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[  +0.000597] IPv4: martian source 10.244.0.32 from 10.244.0.8, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f6 c3 58 35 ff 7f 08 06
	[ +14.608947] IPv4: martian source 10.244.0.33 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 69 01 69 2f bf 08 06
	[  +1.598945] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 5a a6 ac 71 28 08 06
	[Sep19 22:20] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 b1 85 96 7b 86 08 06
	[Sep19 22:22] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 02 8f 31 b5 07 08 06
	[Sep19 22:23] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 66 98 c0 70 e0 08 06
	[Sep19 22:24] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 59 63 bf 9f 6e 08 06
	
	
	==> etcd [82877f0ef29e] <==
	{"level":"warn","ts":"2025-09-19T22:44:31.518399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.234288755s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-434755-m02\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-19T22:44:31.518473Z","caller":"traceutil/trace.go:172","msg":"trace[1374295348] range","detail":"{range_begin:/registry/minions/ha-434755-m02; range_end:; }","duration":"7.2349316s","start":"2025-09-19T22:44:24.283528Z","end":"2025-09-19T22:44:31.518460Z","steps":["trace[1374295348] 'agreement among raft nodes before linearized reading'  (duration: 7.233255005s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.518555Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:24.283491Z","time spent":"7.235045421s","remote":"127.0.0.1:38472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-434755-m02\" limit:1 "}
	{"level":"warn","ts":"2025-09-19T22:44:31.518668Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"1.736149937s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-19T22:44:31.518722Z","caller":"traceutil/trace.go:172","msg":"trace[1053476471] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.737054393s","start":"2025-09-19T22:44:29.781656Z","end":"2025-09-19T22:44:31.518711Z","steps":["trace[1053476471] 'agreement among raft nodes before linearized reading'  (duration: 1.73512419s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.518758Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:29.781637Z","time spent":"1.737109189s","remote":"127.0.0.1:38180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-09-19T22:44:31.518903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.40394195s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-19T22:44:31.518945Z","caller":"traceutil/trace.go:172","msg":"trace[2110002600] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; }","duration":"7.405058752s","start":"2025-09-19T22:44:24.113876Z","end":"2025-09-19T22:44:31.518934Z","steps":["trace[2110002600] 'agreement among raft nodes before linearized reading'  (duration: 7.402910467s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.518976Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:24.113857Z","time spent":"7.405108473s","remote":"127.0.0.1:38480","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":0,"response size":0,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" limit:1 "}
	{"level":"warn","ts":"2025-09-19T22:44:31.519170Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.422889873s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2025-09-19T22:44:31.519220Z","caller":"traceutil/trace.go:172","msg":"trace[1256165761] range","detail":"{range_begin:; range_end:; }","duration":"7.424258534s","start":"2025-09-19T22:44:24.094954Z","end":"2025-09-19T22:44:31.519213Z","steps":["trace[1256165761] 'agreement among raft nodes before linearized reading'  (duration: 7.421834465s)"],"step_count":1}
	{"level":"error","ts":"2025-09-19T22:44:31.519250Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: leader changed\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2220\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2747\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3210\nnet/http.(*conn).serve\n\tnet/http/server.go:2092"}
	{"level":"info","ts":"2025-09-19T22:44:31.523250Z","caller":"traceutil/trace.go:172","msg":"trace[287670632] transaction","detail":"{read_only:false; response_revision:4514; number_of_response:1; }","duration":"2.124726255s","start":"2025-09-19T22:44:29.398508Z","end":"2025-09-19T22:44:31.523234Z","steps":["trace[287670632] 'process raft request'  (duration: 2.124621869s)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:44:31.523276Z","caller":"traceutil/trace.go:172","msg":"trace[697945422] transaction","detail":"{read_only:false; response_revision:4513; number_of_response:1; }","duration":"2.319641996s","start":"2025-09-19T22:44:29.203618Z","end":"2025-09-19T22:44:31.523260Z","steps":["trace[697945422] 'process raft request'  (duration: 2.319448914s)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.523336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"396.571415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-66bc5c9577-4lmln.1866d065bd7082d5\" limit:1 ","response":"range_response_count:1 size:780"}
	{"level":"info","ts":"2025-09-19T22:44:31.523365Z","caller":"traceutil/trace.go:172","msg":"trace[1429377945] range","detail":"{range_begin:/registry/events/kube-system/coredns-66bc5c9577-4lmln.1866d065bd7082d5; range_end:; response_count:1; response_revision:4514; }","duration":"396.603449ms","start":"2025-09-19T22:44:31.126752Z","end":"2025-09-19T22:44:31.523356Z","steps":["trace[1429377945] 'agreement among raft nodes before linearized reading'  (duration: 396.508123ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.523385Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:31.126738Z","time spent":"396.640059ms","remote":"127.0.0.1:38294","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":804,"request content":"key:\"/registry/events/kube-system/coredns-66bc5c9577-4lmln.1866d065bd7082d5\" limit:1 "}
	{"level":"warn","ts":"2025-09-19T22:44:31.523738Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:29.398469Z","time spent":"2.124819336s","remote":"127.0.0.1:38626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":674,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-sa3myi634pjbqcpp7lmaypzwwu\" mod_revision:4505 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-sa3myi634pjbqcpp7lmaypzwwu\" value_size:601 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-sa3myi634pjbqcpp7lmaypzwwu\" > >"}
	{"level":"warn","ts":"2025-09-19T22:44:31.523817Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:29.203598Z","time spent":"2.319713546s","remote":"127.0.0.1:38626","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":524,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-434755\" mod_revision:4504 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-434755\" value_size:474 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-434755\" > >"}
	{"level":"warn","ts":"2025-09-19T22:44:31.523407Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"403.585368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-09-19T22:44:31.524033Z","caller":"traceutil/trace.go:172","msg":"trace[1985771538] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:4514; }","duration":"404.203145ms","start":"2025-09-19T22:44:31.119806Z","end":"2025-09-19T22:44:31.524009Z","steps":["trace[1985771538] 'agreement among raft nodes before linearized reading'  (duration: 403.520779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:44:31.524065Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:44:31.119791Z","time spent":"404.260974ms","remote":"127.0.0.1:38248","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":155,"request content":"key:\"/registry/masterleases/192.168.49.2\" limit:1 "}
	{"level":"info","ts":"2025-09-19T22:53:50.534326Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":5197}
	{"level":"info","ts":"2025-09-19T22:53:50.614274Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":5197,"took":"79.384907ms","hash":15567448,"current-db-size-bytes":8712192,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":1974272,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-09-19T22:53:50.614327Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":15567448,"revision":5197,"compact-revision":-1}
	
	
	==> etcd [af499a9e8d13] <==
	{"level":"warn","ts":"2025-09-19T22:43:28.710620Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:43:28.134490Z","time spent":"576.116112ms","remote":"127.0.0.1:59032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":29,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts\" limit:1 "}
	2025/09/19 22:43:28 WARNING: [core] [Server #5]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"error","ts":"2025-09-19T22:43:28.795621Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-19T22:43:28.795723Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:43:28.795768Z","caller":"etcdserver/server.go:1272","msg":"skipped leadership transfer; local server is not leader","local-member-id":"aec36adc501070cc","current-leader-member-id":"0"}
	{"level":"warn","ts":"2025-09-19T22:43:28.795817Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-19T22:43:28.795833Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:43:28.795839Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:43:28.795850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:43:28.795857Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:43:28.795821Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:43:28.795873Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:43:28.795883Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:43:28.795910Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"warn","ts":"2025-09-19T22:43:28.796079Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796099Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796117Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796143Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796219Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796263Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.796271Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"a99fbed258953a7f"}
	{"level":"info","ts":"2025-09-19T22:43:28.798111Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:43:28.798175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:43:28.798212Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:43:28.798226Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-434755","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:54:22 up  1:36,  0 users,  load average: 0.37, 0.52, 6.36
	Linux ha-434755 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c9a94a8bca16] <==
	I0919 22:42:48.398788       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:48.398985       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:48.399000       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:42:48.399101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:48.399114       1 main.go:301] handling current node
	I0919 22:42:58.397777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:42:58.397818       1 main.go:301] handling current node
	I0919 22:42:58.397838       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:42:58.397844       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:42:58.398040       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:42:58.398053       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.398799       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:43:08.398837       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:43:08.399068       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:43:08.399086       1 main.go:324] Node ha-434755-m03 has CIDR [10.244.2.0/24] 
	I0919 22:43:08.399203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:43:08.399215       1 main.go:301] handling current node
	I0919 22:43:18.398525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:43:18.398559       1 main.go:301] handling current node
	I0919 22:43:18.398578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:43:18.398583       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:43:28.398479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:43:28.398617       1 main.go:301] handling current node
	I0919 22:43:28.398742       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:43:28.398951       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cf132ef265d8] <==
	I0919 22:53:13.886756       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:53:23.887414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:53:23.887448       1 main.go:301] handling current node
	I0919 22:53:23.887465       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:53:23.887470       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:53:33.889567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:53:33.889607       1 main.go:301] handling current node
	I0919 22:53:33.889627       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:53:33.889633       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:53:43.886857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:53:43.886891       1 main.go:301] handling current node
	I0919 22:53:43.886909       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:53:43.886913       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:53:53.889625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:53:53.889655       1 main.go:301] handling current node
	I0919 22:53:53.889672       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:53:53.889677       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:54:03.894596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:54:03.894628       1 main.go:301] handling current node
	I0919 22:54:03.894644       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:54:03.894648       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	I0919 22:54:13.886650       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:54:13.886682       1 main.go:301] handling current node
	I0919 22:54:13.886698       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:54:13.886705       1 main.go:324] Node ha-434755-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [deaf26f87861] <==
	W0919 22:43:28.533328       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533400       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533640       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533742       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:43:28.533783       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:43:28.533810       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533869       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533943       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.533968       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:43:28.534009       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:43:28.534069       1 watcher.go:335] watch chan error: etcdserver: no leader
	W0919 22:43:28.534210       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.534398       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 22:43:28.534478       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0919 22:43:28.535255       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-19T22:43:28.543028Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0018c2d20/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 22:43:28.543152       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 22:43:28.543351       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 22:43:28.543522       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" auditID="27881c71-d5dd-4843-a350-b8a136195743"
	E0919 22:43:28.543549       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.679µs" method="GET" path="/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result=null
	{"level":"warn","ts":"2025-09-19T22:43:28.543036Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0023ae5a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 22:43:28.543969       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 236.558µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 22:43:28.544219       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 22:43:28.546039       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 22:43:28.546151       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.507516ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	
	
	==> kube-apiserver [fa6431499ef4] <==
	E0919 22:44:31.510221       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:44:31.510395       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:44:31.510405       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:44:31.513420       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:44:31.513464       1 watcher.go:335] watch chan error: etcdserver: no leader
	E0919 22:44:31.513476       1 watcher.go:335] watch chan error: etcdserver: no leader
	{"level":"warn","ts":"2025-09-19T22:44:31.518879Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00081f680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-09-19T22:44:31.519221Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00040e780/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-09-19T22:44:31.519450Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b92b40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	I0919 22:45:05.454417       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:18.828634       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:17.018553       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:46:28.029151       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:47:19.389403       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:47:56.848966       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:48:46.227297       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:49:16.488831       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:50:12.655544       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:50:39.362433       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:51:38.603926       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:52:06.376112       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:52:51.319295       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:53:20.520982       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:53:51.575121       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:54:00.535266       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [379f8eb19bc0] <==
	I0919 22:35:00.473248       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:35:00.473274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:35:00.473273       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:35:00.473294       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:35:00.473349       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:35:00.473933       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:35:00.473968       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:35:00.477672       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 22:35:00.477725       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 22:35:00.477771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 22:35:00.477781       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 22:35:00.477781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:35:00.477788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 22:35:00.486920       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 22:35:00.489123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:35:00.491334       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:35:00.493617       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:35:00.495803       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 22:35:00.498093       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:35:00.499331       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:43:20.484193       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:43:20.484249       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:43:20.484258       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:43:20.484264       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:43:20.484271       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	
	
	==> kube-controller-manager [dde1bdfac198] <==
	I0919 22:43:54.970845       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	E0919 22:44:14.899260       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:14.899291       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:14.899297       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:14.899304       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:14.899310       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:34.899994       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:34.900019       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:34.900024       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:34.900029       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	E0919 22:44:34.900033       1 gc_controller.go:151] "Failed to get node" err="node \"ha-434755-m03\" not found" logger="pod-garbage-collector-controller" node="ha-434755-m03"
	I0919 22:44:34.909304       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-434755-m03"
	I0919 22:44:34.925861       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-434755-m03"
	I0919 22:44:34.925898       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-434755-m03"
	I0919 22:44:34.940787       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-434755-m03"
	I0919 22:44:34.940817       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jrkrv"
	I0919 22:44:34.954356       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-jrkrv"
	I0919 22:44:34.954380       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-434755-m03"
	I0919 22:44:34.969358       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-434755-m03"
	I0919 22:44:34.969385       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dzrbh"
	I0919 22:44:34.986522       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dzrbh"
	I0919 22:44:34.986550       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-434755-m03"
	I0919 22:44:34.999671       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-434755-m03"
	I0919 22:44:34.999701       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-434755-m03"
	I0919 22:44:35.013837       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-434755-m03"
	
	
	==> kube-proxy [4ade0812e45d] <==
	I0919 22:43:53.190687       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:43:53.278732       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:43:53.378934       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:43:53.379002       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:43:53.379543       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:43:53.410493       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:43:53.410591       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:43:53.417605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:43:53.418115       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:43:53.418215       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:43:53.420980       1 config.go:200] "Starting service config controller"
	I0919 22:43:53.421002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:43:53.421235       1 config.go:309] "Starting node config controller"
	I0919 22:43:53.421781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:43:53.421969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:43:53.421231       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:43:53.422038       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:43:53.421266       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:43:53.422063       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:43:53.521673       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:43:53.522865       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:43:53.522890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [54785bb274bd] <==
	I0919 22:34:57.761058       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:34:57.833193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:35:00.913912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-434755&limit=500&resourceVersion=0\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0919 22:35:01.834138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:35:01.834169       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:35:01.834256       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:35:01.855270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:35:01.855328       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:35:01.860764       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:35:01.861199       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:35:01.861231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:35:01.862567       1 config.go:200] "Starting service config controller"
	I0919 22:35:01.862599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:35:01.862627       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:35:01.862658       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:35:01.862680       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:35:01.862685       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:35:01.862736       1 config.go:309] "Starting node config controller"
	I0919 22:35:01.863095       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:35:01.863114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:35:01.963632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:35:01.963649       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:35:01.963870       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [02dea945955e] <==
	I0919 22:43:49.216252       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:43:51.544381       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:43:51.544425       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:43:51.544438       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:43:51.544447       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:43:51.587440       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:43:51.587477       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:43:51.590454       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:43:51.590631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:43:51.590870       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:43:51.590965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:43:51.691672       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [53ac6087206b] <==
	I0919 22:34:38.691784       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:34:49.254859       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0919 22:34:49.254890       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:34:49.254896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:34:56.962003       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:34:56.962030       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:34:56.963821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.963864       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:34:56.964116       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:34:56.964511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:34:57.064621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:43:28.518608       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:43:28.518681       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:43:28.518923       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:43:28.518931       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:43:28.518951       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 22:52:17 ha-434755 kubelet[1383]: E0919 22:52:17.663010    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055183 maxSize=10485760
	Sep 19 22:52:27 ha-434755 kubelet[1383]: E0919 22:52:27.667386    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:52:27 ha-434755 kubelet[1383]: E0919 22:52:27.667491    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055183 maxSize=10485760
	Sep 19 22:52:37 ha-434755 kubelet[1383]: E0919 22:52:37.671821    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:52:37 ha-434755 kubelet[1383]: E0919 22:52:37.671925    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055183 maxSize=10485760
	Sep 19 22:52:47 ha-434755 kubelet[1383]: E0919 22:52:47.676199    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:52:47 ha-434755 kubelet[1383]: E0919 22:52:47.676288    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055183 maxSize=10485760
	Sep 19 22:52:57 ha-434755 kubelet[1383]: E0919 22:52:57.678419    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:52:57 ha-434755 kubelet[1383]: E0919 22:52:57.678539    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055348 maxSize=10485760
	Sep 19 22:53:07 ha-434755 kubelet[1383]: E0919 22:53:07.682772    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:07 ha-434755 kubelet[1383]: E0919 22:53:07.682861    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055348 maxSize=10485760
	Sep 19 22:53:17 ha-434755 kubelet[1383]: E0919 22:53:17.688119    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:17 ha-434755 kubelet[1383]: E0919 22:53:17.688225    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055348 maxSize=10485760
	Sep 19 22:53:27 ha-434755 kubelet[1383]: E0919 22:53:27.692549    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:27 ha-434755 kubelet[1383]: E0919 22:53:27.692656    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055513 maxSize=10485760
	Sep 19 22:53:37 ha-434755 kubelet[1383]: E0919 22:53:37.695980    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:37 ha-434755 kubelet[1383]: E0919 22:53:37.696065    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055513 maxSize=10485760
	Sep 19 22:53:47 ha-434755 kubelet[1383]: E0919 22:53:47.702083    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:47 ha-434755 kubelet[1383]: E0919 22:53:47.702160    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055513 maxSize=10485760
	Sep 19 22:53:57 ha-434755 kubelet[1383]: E0919 22:53:57.705264    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:53:57 ha-434755 kubelet[1383]: E0919 22:53:57.705368    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055693 maxSize=10485760
	Sep 19 22:54:07 ha-434755 kubelet[1383]: E0919 22:54:07.710170    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:54:07 ha-434755 kubelet[1383]: E0919 22:54:07.710265    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055856 maxSize=10485760
	Sep 19 22:54:17 ha-434755 kubelet[1383]: E0919 22:54:17.713795    1383 log.go:32] "ReopenContainerLog from runtime service failed" err="rpc error: code = Unknown desc = docker does not support reopening container log files" containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d"
	Sep 19 22:54:17 ha-434755 kubelet[1383]: E0919 22:54:17.713893    1383 container_log_manager.go:263] "Failed to rotate log for container" err="failed to rotate log \"/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log\": failed to reopen container log \"fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d\": rpc error: code = Unknown desc = docker does not support reopening container log files" worker=1 containerID="fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d" path="/var/log/pods/kube-system_kube-apiserver-ha-434755_4fa94191354aa96f359ef3adf3824d29/kube-apiserver/2.log" currentSize=19055856 maxSize=10485760
	

                                                
                                                
-- /stdout --
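The repeated kubelet errors above come from the container log manager: kube-apiserver's log (~19 MB) is over the 10 MiB rotation limit, so every 10 seconds the kubelet rotates the file and then asks the runtime over CRI to reopen its log; cri-dockerd answers with code Unknown ("docker does not support reopening container log files"), so the rotation never completes and the same pair of errors repeats. The following is a hedged sketch, not part of the captured logs, of issuing the same ReopenContainerLog RPC directly against the runtime socket; the socket path assumes cri-dockerd's default endpoint, and the container ID is the one from the log lines above.

	// Hedged sketch (assumes cri-dockerd's default unix socket): reproduce the CRI call
	// the kubelet's container log manager makes after rotating a container log file.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		"google.golang.org/grpc/status"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/cri-dockerd.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// The same RPC that fails in the kubelet log above.
		_, err = client.ReopenContainerLog(ctx, &runtimeapi.ReopenContainerLogRequest{
			ContainerId: "fa6431499ef46992e3381e81d8ae5dcf044a3dc7ce6653adc991a546c3eb832d",
		})
		if err != nil {
			// With the docker runtime this is expected to return codes.Unknown:
			// "docker does not support reopening container log files".
			fmt.Println("ReopenContainerLog failed:", status.Code(err), err)
		}
	}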
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-434755 -n ha-434755
helpers_test.go:269: (dbg) Run:  kubectl --context ha-434755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hhbsb
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb
helpers_test.go:290: (dbg) kubectl --context ha-434755 describe pod busybox-7b57f96db7-hhbsb:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hhbsb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwqfz (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-rwqfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  10m                    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11m (x2 over 11m)      default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11m (x2 over 11m)      default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11m (x2 over 11m)      default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4m39s (x2 over 9m39s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  32s (x3 over 10m)      default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
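The busybox pod stays Pending because the scheduler keeps reporting "didn't match pod anti-affinity rules": the ReplicaSet spreads its replicas with required pod anti-affinity on app=busybox per hostname, and every schedulable node already hosts a busybox replica, so there is nowhere to place this one and no preemption victim helps. The exact manifest the test deploys is not shown in this log; below is a hedged Go sketch of the kind of anti-affinity term that produces these events.

	// Hedged sketch (illustrative only): a required pod anti-affinity term that forbids two
	// app=busybox pods from sharing a node, matching the FailedScheduling events above.
	package main

	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func busyboxAntiAffinity() *corev1.Affinity {
		return &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				// Hard requirement: no two app=busybox pods may land on the same hostname.
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
	}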
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (643.92s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (273.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p bridge-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: exit status 80 (4m33.297411792s)

                                                
                                                
-- stdout --
	* [bridge-361266] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "bridge-361266" primary control-plane node in "bridge-361266" cluster
	* Pulling base image v0.0.48 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:18:35.738351  594826 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:18:35.738639  594826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:18:35.738651  594826 out.go:374] Setting ErrFile to fd 2...
	I0919 23:18:35.738655  594826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:18:35.738902  594826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:18:35.739519  594826 out.go:368] Setting JSON to false
	I0919 23:18:35.741194  594826 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7252,"bootTime":1758316664,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:18:35.741336  594826 start.go:140] virtualization: kvm guest
	I0919 23:18:35.743465  594826 out.go:179] * [bridge-361266] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:18:35.744869  594826 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:18:35.744891  594826 notify.go:220] Checking for updates...
	I0919 23:18:35.747632  594826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:18:35.749185  594826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:18:35.750231  594826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:18:35.751567  594826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:18:35.752916  594826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:18:35.754732  594826 config.go:182] Loaded profile config "enable-default-cni-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:35.754845  594826 config.go:182] Loaded profile config "false-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:35.754928  594826 config.go:182] Loaded profile config "flannel-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:35.755048  594826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:18:35.780374  594826 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:18:35.780476  594826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:18:35.845786  594826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:82 SystemTime:2025-09-19 23:18:35.832984923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:18:35.845908  594826 docker.go:318] overlay module found
	I0919 23:18:35.848097  594826 out.go:179] * Using the docker driver based on user configuration
	I0919 23:18:35.849118  594826 start.go:304] selected driver: docker
	I0919 23:18:35.849137  594826 start.go:918] validating driver "docker" against <nil>
	I0919 23:18:35.849149  594826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:18:35.849794  594826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:18:35.911827  594826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:82 SystemTime:2025-09-19 23:18:35.900406484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:18:35.912090  594826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:18:35.912406  594826 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:18:35.914240  594826 out.go:179] * Using Docker driver with root privileges
	I0919 23:18:35.915459  594826 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:18:35.915488  594826 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:18:35.915649  594826 start.go:348] cluster config:
	{Name:bridge-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0919 23:18:35.917109  594826 out.go:179] * Starting "bridge-361266" primary control-plane node in "bridge-361266" cluster
	I0919 23:18:35.918288  594826 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:18:35.919561  594826 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:18:35.920603  594826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:35.920666  594826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 23:18:35.920680  594826 cache.go:58] Caching tarball of preloaded images
	I0919 23:18:35.920716  594826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:18:35.920791  594826 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:18:35.920808  594826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 23:18:35.920964  594826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/config.json ...
	I0919 23:18:35.921007  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/config.json: {Name:mk6d9236065d16557c32497ad8aa443f94f7041b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:35.943082  594826 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:18:35.943103  594826 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:18:35.943121  594826 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:18:35.943156  594826 start.go:360] acquireMachinesLock for bridge-361266: {Name:mk2db029e0666af55b193558716a21aae8c3ae9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:18:35.943283  594826 start.go:364] duration metric: took 105.716µs to acquireMachinesLock for "bridge-361266"
	I0919 23:18:35.943316  594826 start.go:93] Provisioning new machine with config: &{Name:bridge-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:18:35.943386  594826 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:18:35.945600  594826 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:18:35.945852  594826 start.go:159] libmachine.API.Create for "bridge-361266" (driver="docker")
	I0919 23:18:35.945889  594826 client.go:168] LocalClient.Create starting
	I0919 23:18:35.945957  594826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 23:18:35.945996  594826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:18:35.946019  594826 main.go:141] libmachine: Parsing certificate...
	I0919 23:18:35.946095  594826 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 23:18:35.946124  594826 main.go:141] libmachine: Decoding PEM data...
	I0919 23:18:35.946140  594826 main.go:141] libmachine: Parsing certificate...
	I0919 23:18:35.946642  594826 cli_runner.go:164] Run: docker network inspect bridge-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:18:35.964907  594826 cli_runner.go:211] docker network inspect bridge-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:18:35.964978  594826 network_create.go:284] running [docker network inspect bridge-361266] to gather additional debugging logs...
	I0919 23:18:35.964995  594826 cli_runner.go:164] Run: docker network inspect bridge-361266
	W0919 23:18:35.984645  594826 cli_runner.go:211] docker network inspect bridge-361266 returned with exit code 1
	I0919 23:18:35.984692  594826 network_create.go:287] error running [docker network inspect bridge-361266]: docker network inspect bridge-361266: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-361266 not found
	I0919 23:18:35.984710  594826 network_create.go:289] output of [docker network inspect bridge-361266]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-361266 not found
	
	** /stderr **
	I0919 23:18:35.984848  594826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:18:36.006231  594826 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db7021220859 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:a3:92:23:56:8a} reservation:<nil>}
	I0919 23:18:36.007233  594826 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-683ec4c6685e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:9d:60:92:e5:85} reservation:<nil>}
	I0919 23:18:36.008338  594826 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9a40fa74e58 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:8a:56:fb:db:9d} reservation:<nil>}
	I0919 23:18:36.009240  594826 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c04692c8d5c2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:5a:82:94:29:f8} reservation:<nil>}
	I0919 23:18:36.010040  594826 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-3c89de73ae09 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:63:89:43:d8:83} reservation:<nil>}
	I0919 23:18:36.010951  594826 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c93210}
	I0919 23:18:36.010977  594826 network_create.go:124] attempt to create docker network bridge-361266 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0919 23:18:36.011029  594826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-361266 bridge-361266
	I0919 23:18:36.073977  594826 network_create.go:108] docker network bridge-361266 192.168.94.0/24 created
	I0919 23:18:36.074007  594826 kic.go:121] calculated static IP "192.168.94.2" for the "bridge-361266" container
	I0919 23:18:36.074066  594826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:18:36.095169  594826 cli_runner.go:164] Run: docker volume create bridge-361266 --label name.minikube.sigs.k8s.io=bridge-361266 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:18:36.115216  594826 oci.go:103] Successfully created a docker volume bridge-361266
	I0919 23:18:36.115315  594826 cli_runner.go:164] Run: docker run --rm --name bridge-361266-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-361266 --entrypoint /usr/bin/test -v bridge-361266:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:18:38.867709  594826 cli_runner.go:217] Completed: docker run --rm --name bridge-361266-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-361266 --entrypoint /usr/bin/test -v bridge-361266:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.752314096s)
	I0919 23:18:38.867747  594826 oci.go:107] Successfully prepared a docker volume bridge-361266
	I0919 23:18:38.867774  594826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:38.867799  594826 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:18:38.867861  594826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-361266:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:18:41.587898  594826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v bridge-361266:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.719966629s)
	I0919 23:18:41.587949  594826 kic.go:203] duration metric: took 2.720144669s to extract preloaded images to volume ...
	W0919 23:18:41.588098  594826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:18:41.588149  594826 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:18:41.588208  594826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:18:41.651985  594826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-361266 --name bridge-361266 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-361266 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-361266 --network bridge-361266 --ip 192.168.94.2 --volume bridge-361266:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:18:42.020115  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Running}}
	I0919 23:18:42.041947  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:18:42.065122  594826 cli_runner.go:164] Run: docker exec bridge-361266 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:18:42.117751  594826 oci.go:144] the created container "bridge-361266" has a running status.
	I0919 23:18:42.117795  594826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa...
	I0919 23:18:42.327133  594826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:18:42.428150  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:18:42.453341  594826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:18:42.453369  594826 kic_runner.go:114] Args: [docker exec --privileged bridge-361266 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:18:42.527865  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:18:42.549355  594826 machine.go:93] provisionDockerMachine start ...
	I0919 23:18:42.549483  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:42.569279  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:42.569589  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:42.569606  594826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:18:42.719716  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-361266
	
	I0919 23:18:42.719745  594826 ubuntu.go:182] provisioning hostname "bridge-361266"
	I0919 23:18:42.719892  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:42.742859  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:42.743181  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:42.743215  594826 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-361266 && echo "bridge-361266" | sudo tee /etc/hostname
	I0919 23:18:42.910548  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-361266
	
	I0919 23:18:42.910616  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:42.935286  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:42.935597  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:42.935626  594826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-361266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-361266/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-361266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:18:43.082921  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:18:43.082967  594826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:18:43.083008  594826 ubuntu.go:190] setting up certificates
	I0919 23:18:43.083023  594826 provision.go:84] configureAuth start
	I0919 23:18:43.083087  594826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-361266
	I0919 23:18:43.102700  594826 provision.go:143] copyHostCerts
	I0919 23:18:43.102760  594826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:18:43.102768  594826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:18:43.102832  594826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:18:43.102963  594826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:18:43.102974  594826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:18:43.103022  594826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:18:43.103118  594826 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:18:43.103129  594826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:18:43.103157  594826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:18:43.103256  594826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.bridge-361266 san=[127.0.0.1 192.168.94.2 bridge-361266 localhost minikube]
	I0919 23:18:43.236651  594826 provision.go:177] copyRemoteCerts
	I0919 23:18:43.236733  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:18:43.236789  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:43.259992  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:18:43.367211  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:18:43.398024  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 23:18:43.427688  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:18:43.460989  594826 provision.go:87] duration metric: took 377.948362ms to configureAuth
	I0919 23:18:43.461025  594826 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:18:43.461220  594826 config.go:182] Loaded profile config "bridge-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:43.461299  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:43.482409  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:43.482816  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:43.482839  594826 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:18:43.630507  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:18:43.630537  594826 ubuntu.go:71] root file system type: overlay
	I0919 23:18:43.630647  594826 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:18:43.630727  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:43.651060  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:43.651345  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:43.651411  594826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:18:43.822318  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:18:43.822424  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:43.846143  594826 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:43.846639  594826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33116 <nil> <nil>}
	I0919 23:18:43.846695  594826 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:18:45.845670  594826 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 23:18:43.819307934 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 23:18:45.845707  594826 machine.go:96] duration metric: took 3.296325847s to provisionDockerMachine
	I0919 23:18:45.845723  594826 client.go:171] duration metric: took 9.899823679s to LocalClient.Create
	I0919 23:18:45.845740  594826 start.go:167] duration metric: took 9.899888819s to libmachine.API.Create "bridge-361266"
	I0919 23:18:45.845750  594826 start.go:293] postStartSetup for "bridge-361266" (driver="docker")
	I0919 23:18:45.845763  594826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:18:45.845834  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:18:45.845882  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:45.873008  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:18:45.980624  594826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:18:45.984843  594826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:18:45.984913  594826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:18:45.984935  594826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:18:45.984948  594826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:18:45.984978  594826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:18:45.985056  594826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:18:45.985164  594826 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:18:45.985308  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:18:45.996519  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:18:46.040876  594826 start.go:296] duration metric: took 195.109197ms for postStartSetup
	I0919 23:18:46.041373  594826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-361266
	I0919 23:18:46.063915  594826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/config.json ...
	I0919 23:18:46.064220  594826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:18:46.064273  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:46.087798  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:18:46.186825  594826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:18:46.191992  594826 start.go:128] duration metric: took 10.248585273s to createHost
	I0919 23:18:46.192023  594826 start.go:83] releasing machines lock for "bridge-361266", held for 10.248724384s
	I0919 23:18:46.192089  594826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-361266
	I0919 23:18:46.213473  594826 ssh_runner.go:195] Run: cat /version.json
	I0919 23:18:46.213556  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:46.213555  594826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:18:46.213643  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:18:46.239711  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:18:46.239715  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:18:46.337923  594826 ssh_runner.go:195] Run: systemctl --version
	I0919 23:18:46.423877  594826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:18:46.429191  594826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:18:46.464492  594826 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:18:46.464616  594826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:18:46.498652  594826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:18:46.498742  594826 start.go:495] detecting cgroup driver to use...
	I0919 23:18:46.498793  594826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:18:46.498920  594826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:18:46.521858  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:18:46.535038  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:18:46.548948  594826 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:18:46.549013  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:18:46.561004  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:18:46.574018  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:18:46.585921  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:18:46.598385  594826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:18:46.610013  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:18:46.622876  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:18:46.635183  594826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:18:46.647536  594826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:18:46.657864  594826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:18:46.668122  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:46.743345  594826 ssh_runner.go:195] Run: sudo systemctl restart containerd
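The sed edits above switch containerd to the systemd cgroup driver and the pause:3.10.1 sandbox image before this restart; a quick way to confirm the rewrite took effect (a sketch, not something the test itself runs):

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
    sudo grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: registry.k8s.io/pause:3.10.1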
	I0919 23:18:46.836461  594826 start.go:495] detecting cgroup driver to use...
	I0919 23:18:46.836536  594826 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:18:46.836598  594826 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:18:46.851567  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:18:46.865402  594826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:18:46.888425  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:18:46.904152  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:18:46.919556  594826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:18:46.940301  594826 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:18:46.944262  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:18:46.956738  594826 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:18:46.977635  594826 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:18:47.058181  594826 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:18:47.129748  594826 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:18:47.129877  594826 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:18:47.150324  594826 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:18:47.163700  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:47.241675  594826 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:18:48.050709  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
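The 129-byte daemon.json scp'd a few lines above is what flips Docker to the systemd cgroup driver; its exact contents are not logged, so any reconstruction of them is an assumption. The effect can be checked the same way the test does shortly afterwards (sketch):

    sudo cat /etc/docker/daemon.json
    docker info --format '{{.CgroupDriver}}'   # expect: systemd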
	I0919 23:18:48.063768  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:18:48.077990  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:18:48.093717  594826 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:18:48.177073  594826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:18:48.258562  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:48.331299  594826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:18:48.360426  594826 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:18:48.373460  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:48.454405  594826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:18:48.542971  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:18:48.557860  594826 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:18:48.557943  594826 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:18:48.562492  594826 start.go:563] Will wait 60s for crictl version
	I0919 23:18:48.562596  594826 ssh_runner.go:195] Run: which crictl
	I0919 23:18:48.566749  594826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:18:48.613785  594826 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:18:48.613863  594826 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:18:48.642667  594826 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:18:48.672672  594826 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:18:48.672753  594826 cli_runner.go:164] Run: docker network inspect bridge-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:18:48.692002  594826 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:18:48.696724  594826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:18:48.710615  594826 kubeadm.go:875] updating cluster {Name:bridge-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:18:48.710758  594826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:48.710832  594826 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:18:48.736050  594826 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:18:48.736076  594826 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:18:48.736128  594826 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:18:48.759581  594826 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:18:48.759605  594826 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:18:48.759619  594826 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0919 23:18:48.759724  594826 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-361266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
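The kubelet unit drop-in rendered above is written to disk a few steps further down (the 10-kubeadm.conf and kubelet.service scp lines); once it is on disk, the effective kubelet flags can be read back with (sketch, not part of the test):

    sudo systemctl cat kubelet
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf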
	I0919 23:18:48.759784  594826 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:18:48.819051  594826 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:18:48.819080  594826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:18:48.819113  594826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-361266 NodeName:bridge-361266 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:18:48.819273  594826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "bridge-361266"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:18:48.819352  594826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:18:48.830547  594826 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:18:48.830609  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:18:48.840992  594826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0919 23:18:48.862473  594826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:18:48.883574  594826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
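At this point /var/tmp/minikube/kubeadm.yaml.new holds the rendered kubeadm config shown above. A sanity check before the real init could look like this (a sketch; minikube does not run it, and the `kubeadm config validate` subcommand is assumed to be present in this kubeadm build):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # alternatively, the same parsing plus preflight checks:
    # kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new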
	I0919 23:18:48.904942  594826 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:18:48.909363  594826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:18:48.923336  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:49.013775  594826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:18:49.042482  594826 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266 for IP: 192.168.94.2
	I0919 23:18:49.042524  594826 certs.go:194] generating shared ca certs ...
	I0919 23:18:49.042545  594826 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.042721  594826 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:18:49.042787  594826 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:18:49.042801  594826 certs.go:256] generating profile certs ...
	I0919 23:18:49.042943  594826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.key
	I0919 23:18:49.042969  594826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.crt with IP's: []
	I0919 23:18:49.218724  594826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.crt ...
	I0919 23:18:49.218753  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.crt: {Name:mkd106ca1d8fcce0e1092e9397ec3dc4f00f6b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.218962  594826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.key ...
	I0919 23:18:49.218985  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/client.key: {Name:mkb03c2c9843a6287dbfea42904c1e82b5645f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.219102  594826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key.17d88b5a
	I0919 23:18:49.219119  594826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt.17d88b5a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:18:49.311184  594826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt.17d88b5a ...
	I0919 23:18:49.311216  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt.17d88b5a: {Name:mk44dd9846c8de8286a604285bf45a775ebc49da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.311398  594826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key.17d88b5a ...
	I0919 23:18:49.311420  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key.17d88b5a: {Name:mk7581020860d50b75cf6cdaefaecab3a3993005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.311547  594826 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt.17d88b5a -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt
	I0919 23:18:49.311654  594826 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key.17d88b5a -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key
	I0919 23:18:49.311717  594826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.key
	I0919 23:18:49.311732  594826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.crt with IP's: []
	I0919 23:18:49.638556  594826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.crt ...
	I0919 23:18:49.638591  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.crt: {Name:mka4cd8c169cc9034af528f51c4a444ddc19aa71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:49.638825  594826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.key ...
	I0919 23:18:49.638849  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.key: {Name:mk350ea8299cfb88a208f0f6ec701e3b9747177e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
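The apiserver serving certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2; those SANs can be read back with openssl (sketch, not part of the test run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt \
      | grep -A1 'Subject Alternative Name'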
	I0919 23:18:49.639090  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:18:49.639135  594826 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:18:49.639147  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:18:49.639183  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:18:49.639209  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:18:49.639234  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:18:49.639287  594826 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:18:49.639941  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:18:49.667967  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:18:49.695539  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:18:49.725979  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:18:49.756858  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:18:49.784671  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:18:49.812589  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:18:49.841651  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/bridge-361266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:18:49.870274  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:18:49.902218  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:18:49.931958  594826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:18:49.961397  594826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:18:49.987112  594826 ssh_runner.go:195] Run: openssl version
	I0919 23:18:49.994329  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:18:50.009017  594826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:18:50.015178  594826 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:18:50.015261  594826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:18:50.023062  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:18:50.034554  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:18:50.045883  594826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:50.050304  594826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:50.050431  594826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:50.058334  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:18:50.069839  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:18:50.081975  594826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:18:50.086526  594826 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:18:50.086601  594826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:18:50.094304  594826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
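The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding PEM files, which is why the test computes them with `openssl x509 -hash` before linking; for example (sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above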
	I0919 23:18:50.106218  594826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:18:50.110545  594826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:18:50.110603  594826 kubeadm.go:392] StartCluster: {Name:bridge-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:18:50.110726  594826 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:18:50.135480  594826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:18:50.146135  594826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:18:50.157047  594826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:18:50.157109  594826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:18:50.168157  594826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:18:50.168182  594826 kubeadm.go:157] found existing configuration files:
	
	I0919 23:18:50.168226  594826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:18:50.178826  594826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:18:50.178890  594826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:18:50.190256  594826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:18:50.205472  594826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:18:50.205583  594826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:18:50.216759  594826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:18:50.227818  594826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:18:50.227882  594826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:18:50.238329  594826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:18:50.249644  594826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:18:50.249714  594826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:18:50.259885  594826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:18:50.306600  594826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:18:50.306675  594826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:18:50.328986  594826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:18:50.329064  594826 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:18:50.329114  594826 kubeadm.go:310] OS: Linux
	I0919 23:18:50.329196  594826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:18:50.329313  594826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:18:50.329365  594826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:18:50.329404  594826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:18:50.329475  594826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:18:50.329580  594826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:18:50.329648  594826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:18:50.329718  594826 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:18:50.403377  594826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:18:50.403560  594826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:18:50.403697  594826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:18:50.418393  594826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:18:50.421284  594826 out.go:252]   - Generating certificates and keys ...
	I0919 23:18:50.421385  594826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:18:50.421481  594826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:18:50.677186  594826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:18:51.078878  594826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:18:51.129011  594826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:18:51.680495  594826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:18:52.093033  594826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:18:52.093202  594826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-361266 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:18:52.578730  594826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:18:52.578934  594826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-361266 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:18:53.030070  594826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:18:53.163266  594826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:18:53.576029  594826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:18:53.576255  594826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:18:53.713806  594826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:18:53.972825  594826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:18:54.390976  594826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:18:54.548774  594826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:18:55.026933  594826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:18:55.027050  594826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:18:55.031960  594826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:18:55.033535  594826 out.go:252]   - Booting up control plane ...
	I0919 23:18:55.033677  594826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:18:55.033786  594826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:18:55.035236  594826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:18:55.053421  594826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:18:55.053676  594826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:18:55.064744  594826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:18:55.065097  594826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:18:55.065177  594826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:18:55.199555  594826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:18:55.199697  594826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:18:55.700245  594826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.052392ms
	I0919 23:18:55.703246  594826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:18:55.703374  594826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:18:55.703545  594826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:18:55.703655  594826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:18:57.514644  594826 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.811191122s
	I0919 23:18:58.824694  594826 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.121306942s
	I0919 23:19:00.704911  594826 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001614495s
	I0919 23:19:00.717856  594826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:19:00.731657  594826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:19:00.742976  594826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:19:00.743266  594826 kubeadm.go:310] [mark-control-plane] Marking the node bridge-361266 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:19:00.754792  594826 kubeadm.go:310] [bootstrap-token] Using token: vabf34.g2egwscaqi2lxldu
	I0919 23:19:00.756319  594826 out.go:252]   - Configuring RBAC rules ...
	I0919 23:19:00.756550  594826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:19:00.761320  594826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:19:00.769976  594826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:19:00.773073  594826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:19:00.776334  594826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:19:00.779684  594826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:19:01.111794  594826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:19:01.543089  594826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:19:02.117873  594826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:19:02.117898  594826 kubeadm.go:310] 
	I0919 23:19:02.117986  594826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:19:02.117992  594826 kubeadm.go:310] 
	I0919 23:19:02.118101  594826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:19:02.118107  594826 kubeadm.go:310] 
	I0919 23:19:02.118143  594826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:19:02.118224  594826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:19:02.118294  594826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:19:02.118299  594826 kubeadm.go:310] 
	I0919 23:19:02.118376  594826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:19:02.118381  594826 kubeadm.go:310] 
	I0919 23:19:02.118448  594826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:19:02.118454  594826 kubeadm.go:310] 
	I0919 23:19:02.118545  594826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:19:02.118649  594826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:19:02.118744  594826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:19:02.118750  594826 kubeadm.go:310] 
	I0919 23:19:02.118886  594826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:19:02.118995  594826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:19:02.119000  594826 kubeadm.go:310] 
	I0919 23:19:02.119117  594826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vabf34.g2egwscaqi2lxldu \
	I0919 23:19:02.119264  594826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 23:19:02.119292  594826 kubeadm.go:310] 	--control-plane 
	I0919 23:19:02.119298  594826 kubeadm.go:310] 
	I0919 23:19:02.119417  594826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:19:02.119423  594826 kubeadm.go:310] 
	I0919 23:19:02.119557  594826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vabf34.g2egwscaqi2lxldu \
	I0919 23:19:02.119703  594826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 23:19:02.127137  594826 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:19:02.127293  594826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
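The Service-Kubelet preflight warning above appears harmless in this flow, since minikube starts the kubelet explicitly a few lines earlier; on a long-lived node it can be silenced with the command the warning itself suggests (sketch):

    sudo systemctl enable kubelet.service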
	I0919 23:19:02.127319  594826 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:19:02.129460  594826 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:19:02.131471  594826 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:19:02.144781  594826 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
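The 496-byte conflist written above is minikube's bridge CNI configuration. Its exact payload is not logged, but it can be dumped directly; typically (an assumption, not shown in the log) it declares a bridge plugin with host-local IPAM over the 10.244.0.0/16 pod CIDR plus a portmap plugin:

    sudo cat /etc/cni/net.d/1-k8s.conflist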
	I0919 23:19:02.172328  594826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:19:02.172534  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-361266 minikube.k8s.io/updated_at=2025_09_19T23_19_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=bridge-361266 minikube.k8s.io/primary=true
	I0919 23:19:02.172636  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:02.185275  594826 ops.go:34] apiserver oom_adj: -16
	I0919 23:19:02.302077  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:02.802227  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:03.302772  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:03.802680  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:04.302765  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:04.802731  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:05.303198  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:05.802145  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:06.303133  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:06.802787  594826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:06.908438  594826 kubeadm.go:1105] duration metric: took 4.736033823s to wait for elevateKubeSystemPrivileges
	I0919 23:19:06.908477  594826 kubeadm.go:394] duration metric: took 16.797878959s to StartCluster
	I0919 23:19:06.908540  594826 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:06.908612  594826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:19:06.910107  594826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:06.910351  594826 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:19:06.910635  594826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:19:06.910654  594826 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:19:06.910751  594826 addons.go:69] Setting storage-provisioner=true in profile "bridge-361266"
	I0919 23:19:06.910780  594826 addons.go:238] Setting addon storage-provisioner=true in "bridge-361266"
	I0919 23:19:06.910818  594826 host.go:66] Checking if "bridge-361266" exists ...
	I0919 23:19:06.910854  594826 config.go:182] Loaded profile config "bridge-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:19:06.910921  594826 addons.go:69] Setting default-storageclass=true in profile "bridge-361266"
	I0919 23:19:06.910945  594826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-361266"
	I0919 23:19:06.911242  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:19:06.911385  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:19:06.912452  594826 out.go:179] * Verifying Kubernetes components...
	I0919 23:19:06.913584  594826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:06.939735  594826 addons.go:238] Setting addon default-storageclass=true in "bridge-361266"
	I0919 23:19:06.939777  594826 host.go:66] Checking if "bridge-361266" exists ...
	I0919 23:19:06.940190  594826 cli_runner.go:164] Run: docker container inspect bridge-361266 --format={{.State.Status}}
	I0919 23:19:06.940344  594826 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:19:06.941755  594826 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:19:06.941780  594826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:19:06.941850  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:19:06.975267  594826 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:19:06.975292  594826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:19:06.975353  594826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-361266
	I0919 23:19:06.979845  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:19:06.999283  594826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/bridge-361266/id_rsa Username:docker}
	I0919 23:19:07.027780  594826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:19:07.057113  594826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:19:07.099159  594826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:19:07.118917  594826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:19:07.342287  594826 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
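The sed pipeline a few lines above injects a hosts block into the CoreDNS Corefile; the result can be read back with the same kubectl invocation the test uses (sketch):

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected:
    #   hosts {
    #      192.168.94.1 host.minikube.internal
    #      fallthrough
    #   }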
	I0919 23:19:07.344100  594826 node_ready.go:35] waiting up to 15m0s for node "bridge-361266" to be "Ready" ...
	I0919 23:19:07.353216  594826 node_ready.go:49] node "bridge-361266" is "Ready"
	I0919 23:19:07.353259  594826 node_ready.go:38] duration metric: took 9.116755ms for node "bridge-361266" to be "Ready" ...
	I0919 23:19:07.353282  594826 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:19:07.353346  594826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:19:07.516456  594826 api_server.go:72] duration metric: took 606.067182ms to wait for apiserver process to appear ...
	I0919 23:19:07.516650  594826 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:19:07.516681  594826 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:19:07.524889  594826 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
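The same health probe can be reproduced by hand; in a default cluster /healthz (like /livez and /readyz) is readable anonymously via the system:public-info-viewer binding, so no credentials are strictly needed for this sketch:

    curl -sk https://192.168.94.2:8443/healthz   # expect: ok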
	I0919 23:19:07.526272  594826 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:19:07.526485  594826 api_server.go:141] control plane version: v1.34.0
	I0919 23:19:07.526540  594826 api_server.go:131] duration metric: took 9.876082ms to wait for apiserver health ...
	I0919 23:19:07.526551  594826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:19:07.527693  594826 addons.go:514] duration metric: took 617.039989ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:19:07.531542  594826 system_pods.go:59] 8 kube-system pods found
	I0919 23:19:07.531587  594826 system_pods.go:61] "coredns-66bc5c9577-kzv9s" [b4d89f6c-23fa-488c-b90b-37652fb2661e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.531602  594826 system_pods.go:61] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.531621  594826 system_pods.go:61] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:07.531635  594826 system_pods.go:61] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:07.531654  594826 system_pods.go:61] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:07.531671  594826 system_pods.go:61] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:07.531680  594826 system_pods.go:61] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:07.531685  594826 system_pods.go:61] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Pending
	I0919 23:19:07.531696  594826 system_pods.go:74] duration metric: took 5.138596ms to wait for pod list to return data ...
	I0919 23:19:07.531710  594826 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:19:07.535618  594826 default_sa.go:45] found service account: "default"
	I0919 23:19:07.535646  594826 default_sa.go:55] duration metric: took 2.683894ms for default service account to be created ...
	I0919 23:19:07.535658  594826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:19:07.539225  594826 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:07.539263  594826 system_pods.go:89] "coredns-66bc5c9577-kzv9s" [b4d89f6c-23fa-488c-b90b-37652fb2661e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.539280  594826 system_pods.go:89] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.539294  594826 system_pods.go:89] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:07.539303  594826 system_pods.go:89] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:07.539312  594826 system_pods.go:89] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:07.539327  594826 system_pods.go:89] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:07.539335  594826 system_pods.go:89] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:07.539340  594826 system_pods.go:89] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Pending
	I0919 23:19:07.539367  594826 retry.go:31] will retry after 231.744959ms: missing components: kube-dns, kube-proxy
	I0919 23:19:07.777690  594826 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:07.777737  594826 system_pods.go:89] "coredns-66bc5c9577-kzv9s" [b4d89f6c-23fa-488c-b90b-37652fb2661e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.777756  594826 system_pods.go:89] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:07.777767  594826 system_pods.go:89] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:07.777784  594826 system_pods.go:89] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:07.777802  594826 system_pods.go:89] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:07.777811  594826 system_pods.go:89] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:07.777823  594826 system_pods.go:89] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:07.777833  594826 system_pods.go:89] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:07.777858  594826 retry.go:31] will retry after 245.152301ms: missing components: kube-dns, kube-proxy
	I0919 23:19:07.849152  594826 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-361266" context rescaled to 1 replicas
	I0919 23:19:08.027832  594826 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:08.027873  594826 system_pods.go:89] "coredns-66bc5c9577-kzv9s" [b4d89f6c-23fa-488c-b90b-37652fb2661e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:08.027884  594826 system_pods.go:89] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:08.027899  594826 system_pods.go:89] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:08.027909  594826 system_pods.go:89] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:08.027925  594826 system_pods.go:89] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:08.027938  594826 system_pods.go:89] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:08.027952  594826 system_pods.go:89] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:08.027962  594826 system_pods.go:89] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:08.027983  594826 retry.go:31] will retry after 385.465333ms: missing components: kube-dns, kube-proxy
	I0919 23:19:08.418377  594826 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:08.418420  594826 system_pods.go:89] "coredns-66bc5c9577-kzv9s" [b4d89f6c-23fa-488c-b90b-37652fb2661e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:08.418435  594826 system_pods.go:89] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:08.418446  594826 system_pods.go:89] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:08.418456  594826 system_pods.go:89] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:08.418472  594826 system_pods.go:89] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:08.418480  594826 system_pods.go:89] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:08.418488  594826 system_pods.go:89] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:08.418536  594826 system_pods.go:89] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:08.418559  594826 retry.go:31] will retry after 523.003306ms: missing components: kube-dns, kube-proxy
	I0919 23:19:08.946424  594826 system_pods.go:86] 7 kube-system pods found
	I0919 23:19:08.946463  594826 system_pods.go:89] "coredns-66bc5c9577-mfxgr" [50b578db-f1d1-44fd-adfc-f0c0f38e1c6b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:08.946473  594826 system_pods.go:89] "etcd-bridge-361266" [b30eca58-14ca-4f9c-a9ac-bc6e5ed13121] Running
	I0919 23:19:08.946483  594826 system_pods.go:89] "kube-apiserver-bridge-361266" [f4ed21b6-325f-4cc5-8ab4-d438fc07e7b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:08.946493  594826 system_pods.go:89] "kube-controller-manager-bridge-361266" [0ed5f592-17e2-463d-9e26-3a64e0512eb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:08.946523  594826 system_pods.go:89] "kube-proxy-gx559" [ed5d7a4a-7146-47b7-a068-941565ff4362] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:08.946533  594826 system_pods.go:89] "kube-scheduler-bridge-361266" [6de44078-a90f-4ad2-bb3a-e9fd3cf755a9] Running
	I0919 23:19:08.946539  594826 system_pods.go:89] "storage-provisioner" [b6a107e3-e4f6-4885-98fd-0d09d5706c73] Running
	I0919 23:19:08.946552  594826 system_pods.go:126] duration metric: took 1.410885939s to wait for k8s-apps to be running ...
	I0919 23:19:08.946566  594826 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:19:08.946620  594826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:19:08.960747  594826 system_svc.go:56] duration metric: took 14.164846ms WaitForService to wait for kubelet
	I0919 23:19:08.960780  594826 kubeadm.go:578] duration metric: took 2.050399398s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:19:08.960801  594826 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:19:08.963969  594826 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:19:08.964001  594826 node_conditions.go:123] node cpu capacity is 8
	I0919 23:19:08.964020  594826 node_conditions.go:105] duration metric: took 3.214437ms to run NodePressure ...
	I0919 23:19:08.964037  594826 start.go:241] waiting for startup goroutines ...
	I0919 23:19:08.964051  594826 start.go:246] waiting for cluster config update ...
	I0919 23:19:08.964068  594826 start.go:255] writing updated cluster config ...
	I0919 23:19:08.964344  594826 ssh_runner.go:195] Run: rm -f paused
	I0919 23:19:08.968581  594826 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:19:08.972949  594826 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mfxgr" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:19:10.979620  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:12.980236  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:15.480376  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:17.979215  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:19.979381  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:22.479765  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:24.980970  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:27.479557  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:29.979343  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:31.979985  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:34.479126  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:36.981058  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:39.479573  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:41.479997  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:43.978688  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:45.979077  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:47.979144  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:49.979200  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:51.979627  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:53.979914  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:55.980213  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:19:58.479010  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:00.479532  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:02.979423  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:05.478946  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:07.482662  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:09.979247  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:12.480011  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:14.979031  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:16.979864  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:19.478903  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:21.479126  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:23.979742  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:25.980563  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:28.478216  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:30.478849  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:32.479457  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:34.979573  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:36.980143  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:39.480069  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:41.978975  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:43.987776  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:46.479045  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:48.479209  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:50.479774  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:52.480349  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:54.978881  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:56.979009  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:20:59.479317  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:01.480092  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:03.978927  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:05.979277  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:08.478853  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:10.479617  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:12.481849  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:14.979333  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:17.479198  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:19.978065  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:21.980342  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:24.479475  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:26.979271  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:29.479460  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:31.479981  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:33.979032  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:36.480283  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:38.978670  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:40.978878  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:43.478628  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:45.478787  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:47.478848  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:49.979206  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:52.479252  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:54.978783  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:56.979228  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:21:59.478375  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:01.478413  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:03.478457  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:05.478759  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:07.478900  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:09.479037  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:11.481325  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:13.979973  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:16.480365  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:18.978966  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:20.979129  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:22.979177  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:24.979739  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:27.479420  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:29.978842  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:31.979408  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:34.478887  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:36.479391  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:38.978053  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:40.978307  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:42.978861  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:45.478807  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:47.978542  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:49.978777  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:52.479960  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:54.978901  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:57.478071  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:22:59.478316  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:23:01.478774  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:23:03.978211  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:23:05.978936  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	W0919 23:23:07.979179  594826 pod_ready.go:104] pod "coredns-66bc5c9577-mfxgr" is not "Ready", error: <nil>
	I0919 23:23:08.969486  594826 pod_ready.go:86] duration metric: took 3m59.996498589s for pod "coredns-66bc5c9577-mfxgr" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:23:08.969565  594826 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 23:23:08.969585  594826 pod_ready.go:40] duration metric: took 4m0.00096382s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:23:08.971245  594826 out.go:203] 
	W0919 23:23:08.972282  594826 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 23:23:08.973278  594826 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (273.33s)
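Note: the failure above is the 4m0s "extra" wait expiring because pod "coredns-66bc5c9577-mfxgr" never reported Ready. A minimal diagnostic sketch for inspecting that pod after such a run, assuming the kubeconfig context that minikube creates for the bridge-361266 profile still exists (the pod name and label are taken from the log above; this is not part of the test itself):

	# Did the kube-dns pods ever become Ready?
	kubectl --context bridge-361266 -n kube-system get pods -l k8s-app=kube-dns -o wide

	# Events and container state explaining why the pod stayed unready
	kubectl --context bridge-361266 -n kube-system describe pod coredns-66bc5c9577-mfxgr

	# CoreDNS container logs (readiness-probe or plugin errors usually show here)
	kubectl --context bridge-361266 -n kube-system logs coredns-66bc5c9577-mfxgr

	# Full minikube diagnostic bundle for the profile
	minikube -p bridge-361266 logs --file=bridge-361266-logs.txt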

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (275.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: exit status 80 (4m35.604738326s)

                                                
                                                
-- stdout --
	* [kubenet-361266] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "kubenet-361266" primary control-plane node in "kubenet-361266" cluster
	* Pulling base image v0.0.48 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:18:39.199733  596156 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:18:39.199900  596156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:18:39.199908  596156 out.go:374] Setting ErrFile to fd 2...
	I0919 23:18:39.199913  596156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:18:39.200252  596156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:18:39.200853  596156 out.go:368] Setting JSON to false
	I0919 23:18:39.202429  596156 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7255,"bootTime":1758316664,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:18:39.202531  596156 start.go:140] virtualization: kvm guest
	I0919 23:18:39.205018  596156 out.go:179] * [kubenet-361266] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:18:39.206413  596156 notify.go:220] Checking for updates...
	I0919 23:18:39.206466  596156 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:18:39.207653  596156 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:18:39.208863  596156 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:18:39.210097  596156 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:18:39.211314  596156 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:18:39.212469  596156 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:18:39.214475  596156 config.go:182] Loaded profile config "bridge-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:39.214648  596156 config.go:182] Loaded profile config "enable-default-cni-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:39.214837  596156 config.go:182] Loaded profile config "flannel-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:39.215026  596156 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:18:39.247910  596156 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:18:39.247997  596156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:18:39.325251  596156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:84 SystemTime:2025-09-19 23:18:39.312983526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:18:39.325404  596156 docker.go:318] overlay module found
	I0919 23:18:39.350271  596156 out.go:179] * Using the docker driver based on user configuration
	I0919 23:18:39.413909  596156 start.go:304] selected driver: docker
	I0919 23:18:39.413937  596156 start.go:918] validating driver "docker" against <nil>
	I0919 23:18:39.413956  596156 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:18:39.414763  596156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:18:39.480454  596156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:84 SystemTime:2025-09-19 23:18:39.467199176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:18:39.480756  596156 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:18:39.481068  596156 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:18:39.538901  596156 out.go:179] * Using Docker driver with root privileges
	I0919 23:18:39.540441  596156 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0919 23:18:39.540705  596156 start.go:348] cluster config:
	{Name:kubenet-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Net
workPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0919 23:18:39.542308  596156 out.go:179] * Starting "kubenet-361266" primary control-plane node in "kubenet-361266" cluster
	I0919 23:18:39.543587  596156 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:18:39.548692  596156 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:18:39.550196  596156 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:39.550255  596156 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 23:18:39.550270  596156 cache.go:58] Caching tarball of preloaded images
	I0919 23:18:39.550328  596156 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:18:39.550401  596156 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:18:39.550424  596156 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 23:18:39.550585  596156 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/config.json ...
	I0919 23:18:39.550611  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/config.json: {Name:mkbc974d91b6a097b11f5278cefe0daa03ee2992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:39.575520  596156 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:18:39.575548  596156 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:18:39.575569  596156 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:18:39.575605  596156 start.go:360] acquireMachinesLock for kubenet-361266: {Name:mkeeb1763f367b0afcc57e4d127408133ac49205 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:18:39.575728  596156 start.go:364] duration metric: took 96.346µs to acquireMachinesLock for "kubenet-361266"
	I0919 23:18:39.575764  596156 start.go:93] Provisioning new machine with config: &{Name:kubenet-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:18:39.575878  596156 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:18:39.577939  596156 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:18:39.578246  596156 start.go:159] libmachine.API.Create for "kubenet-361266" (driver="docker")
	I0919 23:18:39.578289  596156 client.go:168] LocalClient.Create starting
	I0919 23:18:39.578357  596156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 23:18:39.578397  596156 main.go:141] libmachine: Decoding PEM data...
	I0919 23:18:39.578417  596156 main.go:141] libmachine: Parsing certificate...
	I0919 23:18:39.578493  596156 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 23:18:39.578543  596156 main.go:141] libmachine: Decoding PEM data...
	I0919 23:18:39.578561  596156 main.go:141] libmachine: Parsing certificate...
	I0919 23:18:39.579008  596156 cli_runner.go:164] Run: docker network inspect kubenet-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:18:39.598808  596156 cli_runner.go:211] docker network inspect kubenet-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:18:39.598897  596156 network_create.go:284] running [docker network inspect kubenet-361266] to gather additional debugging logs...
	I0919 23:18:39.598926  596156 cli_runner.go:164] Run: docker network inspect kubenet-361266
	W0919 23:18:39.619724  596156 cli_runner.go:211] docker network inspect kubenet-361266 returned with exit code 1
	I0919 23:18:39.619752  596156 network_create.go:287] error running [docker network inspect kubenet-361266]: docker network inspect kubenet-361266: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubenet-361266 not found
	I0919 23:18:39.619769  596156 network_create.go:289] output of [docker network inspect kubenet-361266]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubenet-361266 not found
	
	** /stderr **
	I0919 23:18:39.619888  596156 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:18:39.643806  596156 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db7021220859 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:a3:92:23:56:8a} reservation:<nil>}
	I0919 23:18:39.644599  596156 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-683ec4c6685e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:9d:60:92:e5:85} reservation:<nil>}
	I0919 23:18:39.645360  596156 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9a40fa74e58 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:8a:56:fb:db:9d} reservation:<nil>}
	I0919 23:18:39.646035  596156 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c04692c8d5c2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:5a:82:94:29:f8} reservation:<nil>}
	I0919 23:18:39.646898  596156 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce9660}
	I0919 23:18:39.646922  596156 network_create.go:124] attempt to create docker network kubenet-361266 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0919 23:18:39.646980  596156 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-361266 kubenet-361266
	I0919 23:18:39.714415  596156 network_create.go:108] docker network kubenet-361266 192.168.85.0/24 created
	I0919 23:18:39.714446  596156 kic.go:121] calculated static IP "192.168.85.2" for the "kubenet-361266" container
	I0919 23:18:39.714536  596156 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:18:39.737347  596156 cli_runner.go:164] Run: docker volume create kubenet-361266 --label name.minikube.sigs.k8s.io=kubenet-361266 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:18:39.759017  596156 oci.go:103] Successfully created a docker volume kubenet-361266
	I0919 23:18:39.759156  596156 cli_runner.go:164] Run: docker run --rm --name kubenet-361266-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-361266 --entrypoint /usr/bin/test -v kubenet-361266:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:18:41.813964  596156 cli_runner.go:217] Completed: docker run --rm --name kubenet-361266-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-361266 --entrypoint /usr/bin/test -v kubenet-361266:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (2.054742431s)
	I0919 23:18:41.814004  596156 oci.go:107] Successfully prepared a docker volume kubenet-361266
	I0919 23:18:41.814065  596156 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:41.814102  596156 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:18:41.814197  596156 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-361266:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:18:45.036324  596156 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-361266:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.222032015s)
	I0919 23:18:45.036380  596156 kic.go:203] duration metric: took 3.222269723s to extract preloaded images to volume ...
	W0919 23:18:45.036553  596156 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:18:45.036602  596156 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:18:45.036653  596156 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:18:45.099465  596156 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-361266 --name kubenet-361266 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-361266 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-361266 --network kubenet-361266 --ip 192.168.85.2 --volume kubenet-361266:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:18:45.550474  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Running}}
	I0919 23:18:45.571355  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:18:45.590981  596156 cli_runner.go:164] Run: docker exec kubenet-361266 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:18:45.647143  596156 oci.go:144] the created container "kubenet-361266" has a running status.
	I0919 23:18:45.647181  596156 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa...
	I0919 23:18:46.399120  596156 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:18:46.428260  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:18:46.447817  596156 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:18:46.447845  596156 kic_runner.go:114] Args: [docker exec --privileged kubenet-361266 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:18:46.505142  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:18:46.526784  596156 machine.go:93] provisionDockerMachine start ...
	I0919 23:18:46.526865  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:46.547570  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:46.547912  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:46.547937  596156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:18:46.689550  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-361266
	
	I0919 23:18:46.689584  596156 ubuntu.go:182] provisioning hostname "kubenet-361266"
	I0919 23:18:46.689654  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:46.713787  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:46.713994  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:46.714006  596156 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubenet-361266 && echo "kubenet-361266" | sudo tee /etc/hostname
	I0919 23:18:46.874377  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: kubenet-361266
	
	I0919 23:18:46.874459  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:46.897049  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:46.897358  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:46.897392  596156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-361266' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-361266/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-361266' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:18:47.037436  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
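The cli_runner lines above repeatedly resolve the host-side port that Docker mapped to the container's 22/tcp (33121 for this run) before opening an SSH session into the kic container. A minimal Go sketch of that lookup, assuming only the docker CLI and the container name taken from this log (it is not the minikube cli_runner implementation):

// sketch_ssh_port.go — illustrative only; mirrors the inspect template used above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	// docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' NAME
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("kubenet-361266")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh docker@127.0.0.1 -p", port) // the log shows 33121 for this run
}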
	I0919 23:18:47.037475  596156 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:18:47.037510  596156 ubuntu.go:190] setting up certificates
	I0919 23:18:47.037524  596156 provision.go:84] configureAuth start
	I0919 23:18:47.037588  596156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-361266
	I0919 23:18:47.057309  596156 provision.go:143] copyHostCerts
	I0919 23:18:47.057376  596156 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:18:47.057389  596156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:18:47.057458  596156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:18:47.057589  596156 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:18:47.057601  596156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:18:47.057630  596156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:18:47.057691  596156 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:18:47.057699  596156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:18:47.057721  596156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:18:47.057772  596156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.kubenet-361266 san=[127.0.0.1 192.168.85.2 kubenet-361266 localhost minikube]
	I0919 23:18:47.595580  596156 provision.go:177] copyRemoteCerts
	I0919 23:18:47.595647  596156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:18:47.595685  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:47.616036  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:18:47.717610  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:18:47.749205  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0919 23:18:47.778373  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:18:47.807524  596156 provision.go:87] duration metric: took 769.984105ms to configureAuth
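provision.go above generates the machine's server certificate with san=[127.0.0.1 192.168.85.2 kubenet-361266 localhost minikube] and signs it with the local CA from .minikube/certs. A self-contained sketch of issuing that kind of SAN-bearing server certificate with Go's crypto/x509 (illustrative only, error handling compressed; this is not minikube's cert helper):

// sketch_server_cert.go — errors elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway self-signed CA stands in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP and DNS SANs seen in the san=[...] line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubenet-361266"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"kubenet-361266", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}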
	I0919 23:18:47.807558  596156 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:18:47.807778  596156 config.go:182] Loaded profile config "kubenet-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:18:47.807868  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:47.829077  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:47.829401  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:47.829424  596156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:18:47.968654  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:18:47.968682  596156 ubuntu.go:71] root file system type: overlay
	I0919 23:18:47.968814  596156 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:18:47.968884  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:47.988800  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:47.989119  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:47.989234  596156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:18:48.148057  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:18:48.148172  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:48.167599  596156 main.go:141] libmachine: Using SSH client type: native
	I0919 23:18:48.167912  596156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I0919 23:18:48.167942  596156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:18:49.399900  596156 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 23:18:48.144584971 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 23:18:49.399943  596156 machine.go:96] duration metric: took 2.873134107s to provisionDockerMachine
	I0919 23:18:49.399962  596156 client.go:171] duration metric: took 9.821665444s to LocalClient.Create
	I0919 23:18:49.399993  596156 start.go:167] duration metric: took 9.821750245s to libmachine.API.Create "kubenet-361266"
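The docker.service update above is idempotent: the rendered docker.service.new is diffed against the installed unit, and only on a difference is it moved into place followed by daemon-reload, enable, and restart. A rough Go sketch of that compare-then-swap pattern (illustrative only, not the libmachine implementation; it shells out to systemctl just as the logged command does):

// sketch_unit_update.go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, next string) error {
	old, _ := os.ReadFile(current) // a missing current unit simply counts as "different"
	neu, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(old, neu) {
		return os.Remove(next) // unchanged: drop the .new file and skip the restart
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}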
	I0919 23:18:49.400009  596156 start.go:293] postStartSetup for "kubenet-361266" (driver="docker")
	I0919 23:18:49.400021  596156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:18:49.400091  596156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:18:49.400145  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:49.418421  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:18:49.518964  596156 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:18:49.522686  596156 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:18:49.522727  596156 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:18:49.522742  596156 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:18:49.522751  596156 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:18:49.522767  596156 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:18:49.522832  596156 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:18:49.522945  596156 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:18:49.523079  596156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:18:49.532740  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:18:49.562657  596156 start.go:296] duration metric: took 162.629576ms for postStartSetup
	I0919 23:18:49.563084  596156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-361266
	I0919 23:18:49.582029  596156 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/config.json ...
	I0919 23:18:49.582298  596156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:18:49.582341  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:49.601117  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:18:49.695105  596156 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:18:49.700340  596156 start.go:128] duration metric: took 10.124442929s to createHost
	I0919 23:18:49.700368  596156 start.go:83] releasing machines lock for "kubenet-361266", held for 10.124622851s
	I0919 23:18:49.700436  596156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-361266
	I0919 23:18:49.720114  596156 ssh_runner.go:195] Run: cat /version.json
	I0919 23:18:49.720167  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:49.720198  596156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:18:49.720277  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:18:49.740651  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:18:49.742062  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:18:49.911152  596156 ssh_runner.go:195] Run: systemctl --version
	I0919 23:18:49.916083  596156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:18:49.921118  596156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:18:49.955231  596156 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:18:49.955313  596156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:18:49.998685  596156 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:18:49.998850  596156 start.go:495] detecting cgroup driver to use...
	I0919 23:18:49.998901  596156 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:18:49.999032  596156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:18:50.024384  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:18:50.038295  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:18:50.050218  596156 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:18:50.050292  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:18:50.062124  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:18:50.073951  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:18:50.085619  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:18:50.098662  596156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:18:50.109796  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:18:50.122762  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:18:50.135832  596156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:18:50.147885  596156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:18:50.157810  596156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:18:50.167892  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:50.245753  596156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:18:50.343371  596156 start.go:495] detecting cgroup driver to use...
	I0919 23:18:50.343478  596156 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:18:50.343568  596156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:18:50.358133  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:18:50.371693  596156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:18:50.395977  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:18:50.410964  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:18:50.425888  596156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:18:50.445673  596156 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:18:50.449866  596156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:18:50.461673  596156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (196 bytes)
	I0919 23:18:50.482845  596156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:18:50.569717  596156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:18:50.644917  596156 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:18:50.645054  596156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
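docker.go above reports configuring Docker to use "systemd" as the cgroup driver and writes a 129-byte /etc/docker/daemon.json; the literal payload is not printed in this log. A sketch of the kind of override such a file typically carries — the exact keys and values below are an assumption for illustration, not the bytes from this run:

// sketch_daemon_json.go — hypothetical daemon.json content for a systemd cgroup driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=systemd"}, // matches the "systemd" driver detected above
		"log-driver":     "json-file",                             // assumed defaults, not taken from this log
		"storage-driver": "overlay2",
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would be the material written to /etc/docker/daemon.json
}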
	I0919 23:18:50.666744  596156 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:18:50.680062  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:50.774721  596156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:18:51.607002  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:18:51.620679  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:18:51.635861  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:18:51.650631  596156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:18:51.729402  596156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:18:51.809913  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:51.883252  596156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:18:51.913410  596156 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:18:51.926421  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:51.999941  596156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:18:52.080246  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:18:52.095135  596156 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:18:52.095194  596156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:18:52.099646  596156 start.go:563] Will wait 60s for crictl version
	I0919 23:18:52.099703  596156 ssh_runner.go:195] Run: which crictl
	I0919 23:18:52.103560  596156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:18:52.141802  596156 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:18:52.141871  596156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:18:52.172551  596156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:18:52.209320  596156 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:18:52.209430  596156 cli_runner.go:164] Run: docker network inspect kubenet-361266 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:18:52.235072  596156 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:18:52.240309  596156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
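The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the network gateway 192.168.85.1 exactly once. A rough Go equivalent of that filter-and-append step (illustrative; the mechanism actually used in this run is the shell pipeline shown above):

// sketch_hosts_entry.go — drop any stale "<tab>host.minikube.internal" line, then append a fresh one.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the old mapping, as grep -v does above
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}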
	I0919 23:18:52.255867  596156 kubeadm.go:875] updating cluster {Name:kubenet-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:18:52.256020  596156 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:18:52.256093  596156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:18:52.279639  596156 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:18:52.279668  596156 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:18:52.279735  596156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:18:52.302556  596156 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:18:52.302581  596156 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:18:52.302594  596156 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 docker true true} ...
	I0919 23:18:52.302702  596156 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubenet-361266 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --pod-cidr=10.244.0.0/16
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:kubenet-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:18:52.302760  596156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:18:52.364937  596156 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0919 23:18:52.364973  596156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:18:52.365003  596156 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-361266 NodeName:kubenet-361266 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:18:52.365171  596156 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubenet-361266"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:18:52.365239  596156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:18:52.375992  596156 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:18:52.376074  596156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:18:52.386532  596156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (338 bytes)
	I0919 23:18:52.407244  596156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:18:52.427833  596156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
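The apiserver certificate generated a little further down carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; the first of those is simply the first usable address of the ServiceCIDR (10.96.0.0/12) from the kubeadm config written above. A small Go sketch of that relationship (not minikube code):

// sketch_first_service_ip.go
package main

import (
	"fmt"
	"net/netip"
)

func firstIP(cidr string) (netip.Addr, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, err
	}
	return p.Masked().Addr().Next(), nil // network address + 1
}

func main() {
	ip, err := firstIP("10.96.0.0/12") // serviceSubnet from the kubeadm config above
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1 — the first IP SAN in the apiserver cert generated below
}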
	I0919 23:18:52.449740  596156 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:18:52.454400  596156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:18:52.468745  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:18:52.542433  596156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:18:52.565706  596156 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266 for IP: 192.168.85.2
	I0919 23:18:52.565730  596156 certs.go:194] generating shared ca certs ...
	I0919 23:18:52.565754  596156 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:52.565929  596156 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:18:52.565985  596156 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:18:52.565999  596156 certs.go:256] generating profile certs ...
	I0919 23:18:52.566080  596156 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.key
	I0919 23:18:52.566093  596156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.crt with IP's: []
	I0919 23:18:53.122910  596156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.crt ...
	I0919 23:18:53.122949  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.crt: {Name:mk4d4dd3f08c318179c6ab048730c205617c31a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.123165  596156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.key ...
	I0919 23:18:53.123174  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/client.key: {Name:mk42c53b14147768aec329f3303b868bc6edfff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.123283  596156 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key.731823d2
	I0919 23:18:53.123299  596156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt.731823d2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0919 23:18:53.818317  596156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt.731823d2 ...
	I0919 23:18:53.818352  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt.731823d2: {Name:mk9cadf5fbfd8004aef96a8e4f2fd2e30e861336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.818592  596156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key.731823d2 ...
	I0919 23:18:53.818611  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key.731823d2: {Name:mk6588ae1d756599b13fa8ab256b2d1e791590a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.818718  596156 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt.731823d2 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt
	I0919 23:18:53.818849  596156 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key.731823d2 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key
	I0919 23:18:53.818934  596156 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.key
	I0919 23:18:53.818957  596156 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.crt with IP's: []
	I0919 23:18:53.940079  596156 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.crt ...
	I0919 23:18:53.940120  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.crt: {Name:mk367a8519ec9e78064ef52b96b0222e1696bee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.940362  596156 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.key ...
	I0919 23:18:53.940397  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.key: {Name:mk908a343ad75f3faa4b728754136e4d96e7b3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:18:53.940688  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:18:53.940765  596156 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:18:53.940787  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:18:53.940830  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:18:53.940876  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:18:53.940917  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:18:53.940982  596156 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:18:53.941874  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:18:53.979573  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:18:54.015818  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:18:54.053278  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:18:54.087912  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:18:54.122450  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:18:54.156321  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:18:54.190905  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubenet-361266/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:18:54.225485  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:18:54.265352  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:18:54.300549  596156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:18:54.335319  596156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:18:54.360905  596156 ssh_runner.go:195] Run: openssl version
	I0919 23:18:54.368909  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:18:54.383284  596156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:18:54.388620  596156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:18:54.388697  596156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:18:54.398208  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:18:54.411209  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:18:54.422682  596156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:18:54.427886  596156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:18:54.427968  596156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:18:54.436906  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:18:54.450737  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:18:54.464023  596156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:54.469825  596156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:54.469896  596156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:18:54.479713  596156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
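The three certificate steps above follow the same pattern: compute the OpenSSL subject hash of the CA file and create a /etc/ssl/certs/<hash>.0 symlink so the system trust store can find it (b5213941.0 for minikubeCA.pem in this run). A minimal Go sketch of that step, shelling out to openssl exactly as the logged commands do (illustrative, not the ssh_runner code):

// sketch_cert_hash_link.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// openssl x509 -hash -noout -in CERT
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem in this run
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs (force-replace an existing link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}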
	I0919 23:18:54.494285  596156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:18:54.500028  596156 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:18:54.500159  596156 kubeadm.go:392] StartCluster: {Name:kubenet-361266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubenet-361266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:18:54.500366  596156 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:18:54.528182  596156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:18:54.540756  596156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:18:54.551856  596156 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:18:54.551931  596156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:18:54.564740  596156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:18:54.564763  596156 kubeadm.go:157] found existing configuration files:
	
	I0919 23:18:54.564827  596156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:18:54.577904  596156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:18:54.577976  596156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:18:54.593719  596156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:18:54.605021  596156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:18:54.605091  596156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:18:54.615974  596156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:18:54.627186  596156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:18:54.627254  596156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:18:54.638088  596156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:18:54.650699  596156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:18:54.650786  596156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:18:54.661884  596156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:18:54.743754  596156 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:18:54.814628  596156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:19:07.569739  596156 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:19:07.570377  596156 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:19:07.570524  596156 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:19:07.570623  596156 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:19:07.570674  596156 kubeadm.go:310] OS: Linux
	I0919 23:19:07.570726  596156 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:19:07.570780  596156 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:19:07.570858  596156 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:19:07.571088  596156 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:19:07.571158  596156 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:19:07.571406  596156 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:19:07.571670  596156 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:19:07.571774  596156 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:19:07.571880  596156 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:19:07.572009  596156 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:19:07.572141  596156 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:19:07.572241  596156 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:19:07.575103  596156 out.go:252]   - Generating certificates and keys ...
	I0919 23:19:07.576052  596156 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:19:07.576152  596156 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:19:07.576269  596156 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:19:07.576346  596156 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:19:07.576430  596156 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:19:07.576529  596156 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:19:07.576600  596156 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:19:07.576762  596156 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubenet-361266 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:19:07.576840  596156 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:19:07.577005  596156 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubenet-361266 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:19:07.577100  596156 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:19:07.577200  596156 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:19:07.577276  596156 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:19:07.577378  596156 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:19:07.577461  596156 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:19:07.577577  596156 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:19:07.577669  596156 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:19:07.577774  596156 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:19:07.577846  596156 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:19:07.577950  596156 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:19:07.578053  596156 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:19:07.580132  596156 out.go:252]   - Booting up control plane ...
	I0919 23:19:07.580252  596156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:19:07.580383  596156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:19:07.580492  596156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:19:07.580649  596156 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:19:07.580795  596156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:19:07.580951  596156 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:19:07.581082  596156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:19:07.581143  596156 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:19:07.581336  596156 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:19:07.581487  596156 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:19:07.581578  596156 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002583258s
	I0919 23:19:07.581685  596156 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:19:07.581814  596156 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0919 23:19:07.581919  596156 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:19:07.582009  596156 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:19:07.582098  596156 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.259404974s
	I0919 23:19:07.582199  596156 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.192030288s
	I0919 23:19:07.582307  596156 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.001646168s
	I0919 23:19:07.582463  596156 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:19:07.582685  596156 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:19:07.582786  596156 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:19:07.583056  596156 kubeadm.go:310] [mark-control-plane] Marking the node kubenet-361266 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:19:07.583133  596156 kubeadm.go:310] [bootstrap-token] Using token: orubou.50sgysrq4yvhnpqn
	I0919 23:19:07.585222  596156 out.go:252]   - Configuring RBAC rules ...
	I0919 23:19:07.585370  596156 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:19:07.585528  596156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:19:07.585713  596156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:19:07.585884  596156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:19:07.586048  596156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:19:07.586165  596156 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:19:07.586323  596156 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:19:07.586379  596156 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:19:07.586439  596156 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:19:07.586444  596156 kubeadm.go:310] 
	I0919 23:19:07.586512  596156 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:19:07.586519  596156 kubeadm.go:310] 
	I0919 23:19:07.586604  596156 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:19:07.586609  596156 kubeadm.go:310] 
	I0919 23:19:07.586655  596156 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:19:07.586742  596156 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:19:07.586829  596156 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:19:07.586842  596156 kubeadm.go:310] 
	I0919 23:19:07.586902  596156 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:19:07.586914  596156 kubeadm.go:310] 
	I0919 23:19:07.586976  596156 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:19:07.586987  596156 kubeadm.go:310] 
	I0919 23:19:07.587058  596156 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:19:07.587163  596156 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:19:07.587269  596156 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:19:07.587281  596156 kubeadm.go:310] 
	I0919 23:19:07.587412  596156 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:19:07.587581  596156 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:19:07.587595  596156 kubeadm.go:310] 
	I0919 23:19:07.587702  596156 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token orubou.50sgysrq4yvhnpqn \
	I0919 23:19:07.587840  596156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 23:19:07.587866  596156 kubeadm.go:310] 	--control-plane 
	I0919 23:19:07.587875  596156 kubeadm.go:310] 
	I0919 23:19:07.588012  596156 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:19:07.588022  596156 kubeadm.go:310] 
	I0919 23:19:07.588119  596156 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token orubou.50sgysrq4yvhnpqn \
	I0919 23:19:07.588250  596156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 23:19:07.588265  596156 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0919 23:19:07.588290  596156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:19:07.588388  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubenet-361266 minikube.k8s.io/updated_at=2025_09_19T23_19_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=kubenet-361266 minikube.k8s.io/primary=true
	I0919 23:19:07.588392  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:07.602044  596156 ops.go:34] apiserver oom_adj: -16
	I0919 23:19:07.708058  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:08.208727  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:08.709169  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:09.208991  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:09.709018  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:10.208748  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:10.708213  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:11.208713  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:11.708741  596156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:11.794221  596156 kubeadm.go:1105] duration metric: took 4.205902225s to wait for elevateKubeSystemPrivileges
	I0919 23:19:11.794260  596156 kubeadm.go:394] duration metric: took 17.294108922s to StartCluster
	I0919 23:19:11.794284  596156 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:11.794361  596156 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:19:11.796436  596156 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:11.796777  596156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:19:11.796773  596156 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:19:11.796863  596156 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:19:11.796965  596156 addons.go:69] Setting storage-provisioner=true in profile "kubenet-361266"
	I0919 23:19:11.796990  596156 config.go:182] Loaded profile config "kubenet-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:19:11.797042  596156 addons.go:69] Setting default-storageclass=true in profile "kubenet-361266"
	I0919 23:19:11.797064  596156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-361266"
	I0919 23:19:11.797067  596156 addons.go:238] Setting addon storage-provisioner=true in "kubenet-361266"
	I0919 23:19:11.797190  596156 host.go:66] Checking if "kubenet-361266" exists ...
	I0919 23:19:11.797474  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:19:11.797777  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:19:11.799587  596156 out.go:179] * Verifying Kubernetes components...
	I0919 23:19:11.800909  596156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:11.830221  596156 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:19:11.831756  596156 addons.go:238] Setting addon default-storageclass=true in "kubenet-361266"
	I0919 23:19:11.831820  596156 host.go:66] Checking if "kubenet-361266" exists ...
	I0919 23:19:11.834642  596156 cli_runner.go:164] Run: docker container inspect kubenet-361266 --format={{.State.Status}}
	I0919 23:19:11.835096  596156 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:19:11.835132  596156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:19:11.835185  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:19:11.870163  596156 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:19:11.870191  596156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:19:11.870257  596156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-361266
	I0919 23:19:11.874410  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:19:11.900662  596156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/kubenet-361266/id_rsa Username:docker}
	I0919 23:19:11.933382  596156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:19:11.965059  596156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:19:12.016683  596156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:19:12.033085  596156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:19:12.211534  596156 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0919 23:19:12.213534  596156 node_ready.go:35] waiting up to 15m0s for node "kubenet-361266" to be "Ready" ...
	I0919 23:19:12.223217  596156 node_ready.go:49] node "kubenet-361266" is "Ready"
	I0919 23:19:12.223254  596156 node_ready.go:38] duration metric: took 9.670272ms for node "kubenet-361266" to be "Ready" ...
	I0919 23:19:12.223277  596156 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:19:12.223338  596156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:19:12.472492  596156 api_server.go:72] duration metric: took 675.674299ms to wait for apiserver process to appear ...
	I0919 23:19:12.472541  596156 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:19:12.472563  596156 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0919 23:19:12.482994  596156 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0919 23:19:12.486331  596156 api_server.go:141] control plane version: v1.34.0
	I0919 23:19:12.486371  596156 api_server.go:131] duration metric: took 13.82172ms to wait for apiserver health ...
	I0919 23:19:12.486383  596156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:19:12.489276  596156 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:19:12.490572  596156 addons.go:514] duration metric: took 693.707398ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:19:12.496899  596156 system_pods.go:59] 8 kube-system pods found
	I0919 23:19:12.496939  596156 system_pods.go:61] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending
	I0919 23:19:12.496953  596156 system_pods.go:61] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:12.496963  596156 system_pods.go:61] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:12.496980  596156 system_pods.go:61] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:12.496992  596156 system_pods.go:61] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:12.497007  596156 system_pods.go:61] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:12.497014  596156 system_pods.go:61] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:12.497019  596156 system_pods.go:61] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending
	I0919 23:19:12.497028  596156 system_pods.go:74] duration metric: took 10.637208ms to wait for pod list to return data ...
	I0919 23:19:12.497039  596156 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:19:12.499779  596156 default_sa.go:45] found service account: "default"
	I0919 23:19:12.499807  596156 default_sa.go:55] duration metric: took 2.760602ms for default service account to be created ...
	I0919 23:19:12.499830  596156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:19:12.503784  596156 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:12.503820  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending
	I0919 23:19:12.503831  596156 system_pods.go:89] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:12.503838  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:12.503849  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:12.503859  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:12.503868  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:12.503881  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:12.503894  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending
	I0919 23:19:12.503925  596156 retry.go:31] will retry after 238.371378ms: missing components: kube-dns, kube-proxy
	I0919 23:19:12.717433  596156 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kubenet-361266" context rescaled to 1 replicas
	I0919 23:19:12.748783  596156 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:12.748836  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:12.748849  596156 system_pods.go:89] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:12.748861  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:12.748872  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:12.748884  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:12.748907  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:12.748916  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:12.748926  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:12.748950  596156 retry.go:31] will retry after 297.320949ms: missing components: kube-dns, kube-proxy
	I0919 23:19:13.051523  596156 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:13.051565  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.051577  596156 system_pods.go:89] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.051588  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:13.051598  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:13.051608  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:13.051617  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:13.051625  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:13.051637  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:13.051660  596156 retry.go:31] will retry after 471.679185ms: missing components: kube-dns, kube-proxy
	I0919 23:19:13.531673  596156 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:13.531714  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.531721  596156 system_pods.go:89] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.531728  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:13.531734  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:13.531740  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:13.531749  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:13.531756  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:13.531764  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:13.531786  596156 retry.go:31] will retry after 427.135767ms: missing components: kube-dns, kube-proxy
	I0919 23:19:13.963734  596156 system_pods.go:86] 8 kube-system pods found
	I0919 23:19:13.963777  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.963788  596156 system_pods.go:89] "coredns-66bc5c9577-vjgz2" [1ee4ee71-b38d-4951-9f52-13209f053702] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:13.963798  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:13.963810  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:13.963819  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:13.963829  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:13.963836  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:13.963846  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:19:13.963866  596156 retry.go:31] will retry after 741.014116ms: missing components: kube-dns, kube-proxy
	I0919 23:19:14.709706  596156 system_pods.go:86] 7 kube-system pods found
	I0919 23:19:14.709753  596156 system_pods.go:89] "coredns-66bc5c9577-qlvf4" [01931126-db4b-4660-aa7f-62f2f93854e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:19:14.709767  596156 system_pods.go:89] "etcd-kubenet-361266" [54fdb027-518b-4845-ac48-f46cff388478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:19:14.709776  596156 system_pods.go:89] "kube-apiserver-kubenet-361266" [f6bcb251-cf1e-4459-88ed-ce65172314cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:19:14.709786  596156 system_pods.go:89] "kube-controller-manager-kubenet-361266" [4eaad3be-c3b6-49ad-9df4-19cda6fe0011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:19:14.709795  596156 system_pods.go:89] "kube-proxy-m8jph" [a79bd6be-dd17-4635-8a48-1f8c364f8893] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:19:14.709808  596156 system_pods.go:89] "kube-scheduler-kubenet-361266" [1bea1bb2-27e8-4635-a34e-0e2c7cca9be2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:19:14.709814  596156 system_pods.go:89] "storage-provisioner" [a4e6bf4e-7f61-4f48-8283-371ede47630f] Running
	I0919 23:19:14.709828  596156 system_pods.go:126] duration metric: took 2.20998951s to wait for k8s-apps to be running ...
	I0919 23:19:14.709843  596156 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:19:14.709905  596156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:19:14.725722  596156 system_svc.go:56] duration metric: took 15.857955ms WaitForService to wait for kubelet
	I0919 23:19:14.725760  596156 kubeadm.go:578] duration metric: took 2.928947561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:19:14.725786  596156 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:19:14.729433  596156 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:19:14.729462  596156 node_conditions.go:123] node cpu capacity is 8
	I0919 23:19:14.729533  596156 node_conditions.go:105] duration metric: took 3.737436ms to run NodePressure ...
	I0919 23:19:14.729556  596156 start.go:241] waiting for startup goroutines ...
	I0919 23:19:14.729569  596156 start.go:246] waiting for cluster config update ...
	I0919 23:19:14.729587  596156 start.go:255] writing updated cluster config ...
	I0919 23:19:14.729932  596156 ssh_runner.go:195] Run: rm -f paused
	I0919 23:19:14.734749  596156 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:19:14.738997  596156 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qlvf4" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:19:16.746482  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:19.245049  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:21.245736  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:23.745205  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:26.245681  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:28.744757  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:30.744983  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:32.745296  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:35.245101  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:37.246162  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:39.744555  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:41.745212  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:44.245152  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:46.745638  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:48.746052  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:51.245263  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:53.245304  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:55.245452  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:19:57.745234  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:00.246293  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:02.744752  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:04.745284  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:07.244899  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:09.744718  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:11.745198  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:14.244980  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:16.745216  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:19.245298  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:21.744360  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:23.745094  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:25.745590  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:28.244536  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:30.245058  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:32.745057  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:35.245234  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:37.245569  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:39.745076  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:41.745128  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:43.745783  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:46.244989  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:48.246969  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:50.746437  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:53.245326  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:55.245828  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:20:57.745856  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:00.246557  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:02.744786  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:04.746562  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:07.245420  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:09.245670  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:11.744973  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:13.745177  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:16.245464  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:18.744989  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:20.746965  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:23.244681  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:25.244841  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:27.744636  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:29.745818  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:32.246573  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:34.745875  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:37.245060  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:39.745004  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:41.745971  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:44.244625  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:46.245111  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:48.745603  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:51.245211  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:53.245421  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:55.245554  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:57.745070  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:21:59.745169  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:02.244961  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:04.745609  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:07.245279  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:09.746122  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:12.245759  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:14.744897  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:16.745423  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:18.745775  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:20.745826  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:23.245475  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:25.745413  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:28.245114  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:30.245171  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:32.245728  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:34.744900  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:36.745231  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:39.244747  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:41.244888  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:43.745835  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:46.245192  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:48.744475  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:50.745239  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:53.244372  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:55.244622  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:57.244952  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:22:59.245050  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:01.744607  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:03.744831  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:05.745178  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:08.244827  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:10.244953  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:12.244999  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	W0919 23:23:14.245859  596156 pod_ready.go:104] pod "coredns-66bc5c9577-qlvf4" is not "Ready", error: <nil>
	I0919 23:23:14.735304  596156 pod_ready.go:86] duration metric: took 3m59.996270554s for pod "coredns-66bc5c9577-qlvf4" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:23:14.735336  596156 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 23:23:14.735350  596156 pod_ready.go:40] duration metric: took 4m0.000561127s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:23:14.736745  596156 out.go:203] 
	W0919 23:23:14.737900  596156 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 23:23:14.738999  596156 out.go:203] 

                                                
                                                
** /stderr **
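Note: the kubeadm output captured above prints the join commands together with a --discovery-token-ca-cert-hash value. As a minimal sketch (not part of the test run), that hash can be recomputed from the cluster CA on the control-plane node; this assumes the CA sits at /var/lib/minikube/certs/ca.crt, matching the certificateDir logged above:

    # Recompute the SHA-256 hash of the cluster CA public key (the kubeadm discovery hash format).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'

If the recomputed hash matches the one echoed in the log, later join problems point at networking or token expiry rather than certificate trust.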
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (275.63s)
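The failure above is the extra-wait loop timing out because coredns-66bc5c9577-qlvf4 never reached "Ready" within the 4m0s budget. A minimal sketch for inspecting that pod by hand, assuming the kubeconfig context minikube wrote is named after the profile (kubenet-361266):

    # Show the stuck CoreDNS pod and where it was scheduled.
    kubectl --context kubenet-361266 -n kube-system get pods -l k8s-app=kube-dns -o wide
    # Events usually explain why readiness never succeeded (image pull, sandbox, probe failures).
    kubectl --context kubenet-361266 -n kube-system describe pod coredns-66bc5c9577-qlvf4
    # Container logs for the CoreDNS pods.
    kubectl --context kubenet-361266 -n kube-system logs -l k8s-app=kube-dns --tail=50
    # Reproduce the readiness wait the test performs, with the same 4-minute budget.
    kubectl --context kubenet-361266 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s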
E0919 23:26:31.598277  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (277.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0919 23:20:27.674582  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: exit status 80 (4m35.501071887s)

                                                
                                                
-- stdout --
	* [old-k8s-version-359569] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "old-k8s-version-359569" primary control-plane node in "old-k8s-version-359569" cluster
	* Pulling base image v0.0.48 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
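The stdout above notes that the bridge CNI was configured for this profile. As a sketch only (the paths are assumptions, not taken from the log), the generated CNI config can be inspected inside the node over minikube ssh; /etc/cni/net.d is the conventional CNI configuration directory:

    # List and dump whatever CNI config minikube wrote into the node (path assumed).
    out/minikube-linux-amd64 ssh -p old-k8s-version-359569 -- sudo ls -l /etc/cni/net.d
    out/minikube-linux-amd64 ssh -p old-k8s-version-359569 -- "sudo cat /etc/cni/net.d/*.conflist"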
** stderr ** 
	I0919 23:19:33.489854  614852 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:19:33.490179  614852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:19:33.490190  614852 out.go:374] Setting ErrFile to fd 2...
	I0919 23:19:33.490198  614852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:19:33.490466  614852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:19:33.491067  614852 out.go:368] Setting JSON to false
	I0919 23:19:33.492756  614852 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7309,"bootTime":1758316664,"procs":366,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:19:33.492869  614852 start.go:140] virtualization: kvm guest
	I0919 23:19:33.494796  614852 out.go:179] * [old-k8s-version-359569] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:19:33.495883  614852 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:19:33.495905  614852 notify.go:220] Checking for updates...
	I0919 23:19:33.497718  614852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:19:33.498884  614852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:19:33.499891  614852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:19:33.501003  614852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:19:33.502087  614852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:19:33.503472  614852 config.go:182] Loaded profile config "bridge-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:19:33.503637  614852 config.go:182] Loaded profile config "flannel-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:19:33.503718  614852 config.go:182] Loaded profile config "kubenet-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:19:33.503847  614852 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:19:33.528764  614852 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:19:33.528874  614852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:19:33.589727  614852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:19:33.579698037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:19:33.589851  614852 docker.go:318] overlay module found
	I0919 23:19:33.592892  614852 out.go:179] * Using the docker driver based on user configuration
	I0919 23:19:33.594115  614852 start.go:304] selected driver: docker
	I0919 23:19:33.594132  614852 start.go:918] validating driver "docker" against <nil>
	I0919 23:19:33.594143  614852 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:19:33.594836  614852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:19:33.658353  614852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 23:19:33.647263957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:19:33.658593  614852 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:19:33.658898  614852 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:19:33.660601  614852 out.go:179] * Using Docker driver with root privileges
	I0919 23:19:33.661798  614852 cni.go:84] Creating CNI manager for ""
	I0919 23:19:33.661904  614852 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:19:33.661919  614852 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:19:33.662014  614852 start.go:348] cluster config:
	{Name:old-k8s-version-359569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-359569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:19:33.663321  614852 out.go:179] * Starting "old-k8s-version-359569" primary control-plane node in "old-k8s-version-359569" cluster
	I0919 23:19:33.664281  614852 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:19:33.665643  614852 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:19:33.666634  614852 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0919 23:19:33.666669  614852 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:19:33.666678  614852 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0919 23:19:33.666770  614852 cache.go:58] Caching tarball of preloaded images
	I0919 23:19:33.666848  614852 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:19:33.666859  614852 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0919 23:19:33.666963  614852 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/config.json ...
	I0919 23:19:33.666981  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/config.json: {Name:mk15e8951fa0a42cf0aca174410a43c217f16c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:33.689196  614852 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:19:33.689230  614852 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:19:33.689247  614852 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:19:33.689273  614852 start.go:360] acquireMachinesLock for old-k8s-version-359569: {Name:mkf7066d39df53b45b93cd06b473f7535c4a2cee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:19:33.689378  614852 start.go:364] duration metric: took 87.445µs to acquireMachinesLock for "old-k8s-version-359569"
	I0919 23:19:33.689404  614852 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-359569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-359569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:19:33.689467  614852 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:19:33.691537  614852 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:19:33.691928  614852 start.go:159] libmachine.API.Create for "old-k8s-version-359569" (driver="docker")
	I0919 23:19:33.691981  614852 client.go:168] LocalClient.Create starting
	I0919 23:19:33.692075  614852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 23:19:33.692122  614852 main.go:141] libmachine: Decoding PEM data...
	I0919 23:19:33.692150  614852 main.go:141] libmachine: Parsing certificate...
	I0919 23:19:33.692216  614852 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 23:19:33.692245  614852 main.go:141] libmachine: Decoding PEM data...
	I0919 23:19:33.692260  614852 main.go:141] libmachine: Parsing certificate...
	I0919 23:19:33.692749  614852 cli_runner.go:164] Run: docker network inspect old-k8s-version-359569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:19:33.710409  614852 cli_runner.go:211] docker network inspect old-k8s-version-359569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:19:33.710537  614852 network_create.go:284] running [docker network inspect old-k8s-version-359569] to gather additional debugging logs...
	I0919 23:19:33.710565  614852 cli_runner.go:164] Run: docker network inspect old-k8s-version-359569
	W0919 23:19:33.729702  614852 cli_runner.go:211] docker network inspect old-k8s-version-359569 returned with exit code 1
	I0919 23:19:33.729735  614852 network_create.go:287] error running [docker network inspect old-k8s-version-359569]: docker network inspect old-k8s-version-359569: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-359569 not found
	I0919 23:19:33.729758  614852 network_create.go:289] output of [docker network inspect old-k8s-version-359569]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-359569 not found
	
	** /stderr **
	I0919 23:19:33.729882  614852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:19:33.748561  614852 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db7021220859 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:a3:92:23:56:8a} reservation:<nil>}
	I0919 23:19:33.749377  614852 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-683ec4c6685e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:9d:60:92:e5:85} reservation:<nil>}
	I0919 23:19:33.750382  614852 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9a40fa74e58 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:8a:56:fb:db:9d} reservation:<nil>}
	I0919 23:19:33.751037  614852 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c04692c8d5c2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ae:5a:82:94:29:f8} reservation:<nil>}
	I0919 23:19:33.751749  614852 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a5a909e468ff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:8a:1f:ea:8a:58:af} reservation:<nil>}
	I0919 23:19:33.752323  614852 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-eb892d8c86ab IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:51:ff:88:68:3a} reservation:<nil>}
	I0919 23:19:33.753181  614852 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d5aa00}
	I0919 23:19:33.753211  614852 network_create.go:124] attempt to create docker network old-k8s-version-359569 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0919 23:19:33.753270  614852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-359569 old-k8s-version-359569
	I0919 23:19:33.818104  614852 network_create.go:108] docker network old-k8s-version-359569 192.168.103.0/24 created
	I0919 23:19:33.818141  614852 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-359569" container
	I0919 23:19:33.818218  614852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:19:33.839980  614852 cli_runner.go:164] Run: docker volume create old-k8s-version-359569 --label name.minikube.sigs.k8s.io=old-k8s-version-359569 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:19:33.860336  614852 oci.go:103] Successfully created a docker volume old-k8s-version-359569
	I0919 23:19:33.860434  614852 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-359569-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-359569 --entrypoint /usr/bin/test -v old-k8s-version-359569:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:19:34.258637  614852 oci.go:107] Successfully prepared a docker volume old-k8s-version-359569
	I0919 23:19:34.258688  614852 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0919 23:19:34.258715  614852 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:19:34.258783  614852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-359569:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:19:37.259305  614852 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-359569:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.000467067s)
	I0919 23:19:37.259346  614852 kic.go:203] duration metric: took 3.000626472s to extract preloaded images to volume ...
	W0919 23:19:37.259453  614852 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:19:37.259542  614852 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:19:37.259599  614852 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:19:37.329320  614852 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-359569 --name old-k8s-version-359569 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-359569 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-359569 --network old-k8s-version-359569 --ip 192.168.103.2 --volume old-k8s-version-359569:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:19:37.637791  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Running}}
	I0919 23:19:37.656763  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:19:37.676573  614852 cli_runner.go:164] Run: docker exec old-k8s-version-359569 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:19:37.722315  614852 oci.go:144] the created container "old-k8s-version-359569" has a running status.
	I0919 23:19:37.722346  614852 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa...
	I0919 23:19:37.920585  614852 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:19:37.949998  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:19:37.974222  614852 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:19:37.974249  614852 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-359569 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:19:38.033759  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:19:38.053992  614852 machine.go:93] provisionDockerMachine start ...
	I0919 23:19:38.054095  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:38.073854  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:38.074237  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:38.074262  614852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:19:38.220781  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-359569
	
	I0919 23:19:38.220819  614852 ubuntu.go:182] provisioning hostname "old-k8s-version-359569"
	I0919 23:19:38.220881  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:38.245989  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:38.246323  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:38.246345  614852 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-359569 && echo "old-k8s-version-359569" | sudo tee /etc/hostname
	I0919 23:19:38.409833  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-359569
	
	I0919 23:19:38.409935  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:38.429577  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:38.429804  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:38.429823  614852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-359569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-359569/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-359569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:19:38.571789  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:19:38.571827  614852 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:19:38.571874  614852 ubuntu.go:190] setting up certificates
	I0919 23:19:38.571898  614852 provision.go:84] configureAuth start
	I0919 23:19:38.571970  614852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-359569
	I0919 23:19:38.590533  614852 provision.go:143] copyHostCerts
	I0919 23:19:38.590606  614852 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:19:38.590620  614852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:19:38.590692  614852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:19:38.590787  614852 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:19:38.590795  614852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:19:38.590821  614852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:19:38.590878  614852 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:19:38.590885  614852 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:19:38.590907  614852 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:19:38.590959  614852 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-359569 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-359569]
	I0919 23:19:38.771371  614852 provision.go:177] copyRemoteCerts
	I0919 23:19:38.771446  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:19:38.771550  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:38.791937  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:19:38.896093  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:19:38.925604  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0919 23:19:38.955086  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:19:38.984331  614852 provision.go:87] duration metric: took 412.413215ms to configureAuth
	I0919 23:19:38.984360  614852 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:19:38.984560  614852 config.go:182] Loaded profile config "old-k8s-version-359569": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0919 23:19:38.984609  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:39.004118  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:39.004350  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:39.004362  614852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:19:39.144594  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:19:39.144618  614852 ubuntu.go:71] root file system type: overlay
	I0919 23:19:39.144765  614852 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:19:39.144846  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:39.162762  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:39.163077  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:39.163184  614852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:19:39.327796  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:19:39.327921  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:39.349105  614852 main.go:141] libmachine: Using SSH client type: native
	I0919 23:19:39.349418  614852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I0919 23:19:39.349449  614852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:19:40.536328  614852 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 23:19:39.324862906 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 23:19:40.536362  614852 machine.go:96] duration metric: took 2.48234212s to provisionDockerMachine
	I0919 23:19:40.536376  614852 client.go:171] duration metric: took 6.844384152s to LocalClient.Create
	I0919 23:19:40.536399  614852 start.go:167] duration metric: took 6.844482826s to libmachine.API.Create "old-k8s-version-359569"
	I0919 23:19:40.536410  614852 start.go:293] postStartSetup for "old-k8s-version-359569" (driver="docker")
	I0919 23:19:40.536427  614852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:19:40.536487  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:19:40.536565  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:40.556279  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:19:40.657368  614852 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:19:40.661482  614852 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:19:40.661562  614852 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:19:40.661578  614852 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:19:40.661586  614852 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:19:40.661602  614852 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:19:40.661653  614852 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:19:40.661738  614852 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:19:40.661863  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:19:40.672131  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:19:40.702851  614852 start.go:296] duration metric: took 166.421756ms for postStartSetup
	I0919 23:19:40.703216  614852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-359569
	I0919 23:19:40.720929  614852 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/config.json ...
	I0919 23:19:40.721236  614852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:19:40.721289  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:40.739893  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:19:40.835107  614852 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:19:40.840191  614852 start.go:128] duration metric: took 7.150709987s to createHost
	I0919 23:19:40.840216  614852 start.go:83] releasing machines lock for "old-k8s-version-359569", held for 7.150827523s
	I0919 23:19:40.840295  614852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-359569
	I0919 23:19:40.858283  614852 ssh_runner.go:195] Run: cat /version.json
	I0919 23:19:40.858341  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:40.858369  614852 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:19:40.858458  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:19:40.877717  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:19:40.878593  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:19:40.972342  614852 ssh_runner.go:195] Run: systemctl --version
	I0919 23:19:41.046392  614852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:19:41.052202  614852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:19:41.084907  614852 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:19:41.084993  614852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:19:41.118775  614852 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:19:41.118810  614852 start.go:495] detecting cgroup driver to use...
	I0919 23:19:41.118863  614852 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:19:41.118995  614852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:19:41.140516  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 23:19:41.155875  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:19:41.168246  614852 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:19:41.168333  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:19:41.181010  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:19:41.193427  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:19:41.206004  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:19:41.218164  614852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:19:41.230052  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:19:41.243270  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:19:41.256338  614852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:19:41.271659  614852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:19:41.286680  614852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:19:41.300873  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:41.391302  614852 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:19:41.482698  614852 start.go:495] detecting cgroup driver to use...
	I0919 23:19:41.482756  614852 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:19:41.482810  614852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:19:41.498467  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:19:41.515860  614852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:19:41.537369  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:19:41.553911  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:19:41.567863  614852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:19:41.589054  614852 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:19:41.593410  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:19:41.605128  614852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 23:19:41.626294  614852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:19:41.697592  614852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:19:41.772797  614852 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:19:41.772909  614852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:19:41.795920  614852 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:19:41.808693  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:41.883085  614852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:19:42.672101  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:19:42.685194  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:19:42.698524  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:19:42.711535  614852 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:19:42.790800  614852 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:19:42.863209  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:42.937663  614852 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:19:42.957483  614852 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:19:42.970720  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:43.044554  614852 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:19:43.126725  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:19:43.140849  614852 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:19:43.140920  614852 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:19:43.145128  614852 start.go:563] Will wait 60s for crictl version
	I0919 23:19:43.145192  614852 ssh_runner.go:195] Run: which crictl
	I0919 23:19:43.149533  614852 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:19:43.191607  614852 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:19:43.191687  614852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:19:43.221143  614852 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:19:43.251321  614852 out.go:252] * Preparing Kubernetes v1.28.0 on Docker 28.4.0 ...
	I0919 23:19:43.251416  614852 cli_runner.go:164] Run: docker network inspect old-k8s-version-359569 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:19:43.269610  614852 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0919 23:19:43.274087  614852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:19:43.286523  614852 kubeadm.go:875] updating cluster {Name:old-k8s-version-359569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-359569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:19:43.286638  614852 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0919 23:19:43.286682  614852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:19:43.308791  614852 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:19:43.308817  614852 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:19:43.308879  614852 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:19:43.335053  614852 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:19:43.335083  614852 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:19:43.335098  614852 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.28.0 docker true true} ...
	I0919 23:19:43.335229  614852 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-359569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-359569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:19:43.335301  614852 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:19:43.399264  614852 cni.go:84] Creating CNI manager for ""
	I0919 23:19:43.399305  614852 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:19:43.399320  614852 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:19:43.399358  614852 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-359569 NodeName:old-k8s-version-359569 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:19:43.399552  614852 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "old-k8s-version-359569"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:19:43.399633  614852 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0919 23:19:43.412194  614852 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:19:43.412282  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:19:43.425620  614852 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0919 23:19:43.453135  614852 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:19:43.474826  614852 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0919 23:19:43.497387  614852 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:19:43.501763  614852 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:19:43.516199  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:19:43.597457  614852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:19:43.619606  614852 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569 for IP: 192.168.103.2
	I0919 23:19:43.619630  614852 certs.go:194] generating shared ca certs ...
	I0919 23:19:43.619648  614852 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:43.619798  614852 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:19:43.619846  614852 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:19:43.619857  614852 certs.go:256] generating profile certs ...
	I0919 23:19:43.619911  614852 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.key
	I0919 23:19:43.619923  614852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.crt with IP's: []
	I0919 23:19:43.775045  614852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.crt ...
	I0919 23:19:43.775080  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.crt: {Name:mkff550f22ee579d142313ee482513275c16633e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:43.775275  614852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.key ...
	I0919 23:19:43.775289  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/client.key: {Name:mkaa7e0a87339c171a4347805d5679509b09c626 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:43.775431  614852 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key.654f5f18
	I0919 23:19:43.775457  614852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt.654f5f18 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0919 23:19:44.068407  614852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt.654f5f18 ...
	I0919 23:19:44.068437  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt.654f5f18: {Name:mke096ab35f79240070ad7657467a0aa683efd26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:44.068662  614852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key.654f5f18 ...
	I0919 23:19:44.068695  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key.654f5f18: {Name:mk0ceea91b0bceac9c6ecb00177dc23e7a046d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:44.068814  614852 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt.654f5f18 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt
	I0919 23:19:44.068917  614852 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key.654f5f18 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key
	I0919 23:19:44.069000  614852 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.key
	I0919 23:19:44.069019  614852 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.crt with IP's: []
	I0919 23:19:44.193748  614852 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.crt ...
	I0919 23:19:44.193779  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.crt: {Name:mk0c1b1f38475679590397e6a017a33295beeb2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:44.193982  614852 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.key ...
	I0919 23:19:44.194001  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.key: {Name:mk6c29b00d45eb885b37cbf9bef9bc050f33dfa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:19:44.194229  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:19:44.194279  614852 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:19:44.194298  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:19:44.194331  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:19:44.194363  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:19:44.194404  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:19:44.194462  614852 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:19:44.195153  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:19:44.224630  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:19:44.252258  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:19:44.281068  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:19:44.308716  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 23:19:44.338863  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:19:44.369513  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:19:44.398331  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/old-k8s-version-359569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:19:44.429532  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:19:44.466770  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:19:44.503699  614852 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:19:44.535604  614852 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:19:44.559485  614852 ssh_runner.go:195] Run: openssl version
	I0919 23:19:44.565998  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:19:44.578766  614852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:19:44.583419  614852 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:19:44.583492  614852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:19:44.591619  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:19:44.603994  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:19:44.615692  614852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:19:44.620153  614852 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:19:44.620225  614852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:19:44.628420  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:19:44.639562  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:19:44.651365  614852 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:19:44.656180  614852 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:19:44.656235  614852 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:19:44.663732  614852 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:19:44.675893  614852 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:19:44.679804  614852 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:19:44.679868  614852 kubeadm.go:392] StartCluster: {Name:old-k8s-version-359569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-359569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:19:44.680002  614852 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:19:44.702316  614852 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:19:44.712398  614852 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:19:44.722482  614852 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:19:44.722556  614852 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:19:44.732107  614852 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:19:44.732122  614852 kubeadm.go:157] found existing configuration files:
	
	I0919 23:19:44.732162  614852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:19:44.741889  614852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:19:44.741952  614852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:19:44.752405  614852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:19:44.762588  614852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:19:44.762644  614852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:19:44.773003  614852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:19:44.783441  614852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:19:44.783509  614852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:19:44.793736  614852 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:19:44.805074  614852 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:19:44.805133  614852 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:19:44.815796  614852 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:19:44.864921  614852 kubeadm.go:310] [init] Using Kubernetes version: v1.28.0
	I0919 23:19:44.864998  614852 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:19:44.908048  614852 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:19:44.908149  614852 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:19:44.908206  614852 kubeadm.go:310] OS: Linux
	I0919 23:19:44.908289  614852 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:19:44.908395  614852 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:19:44.908483  614852 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:19:44.908583  614852 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:19:44.908646  614852 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:19:44.908715  614852 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:19:44.908805  614852 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:19:44.908849  614852 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:19:44.983596  614852 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:19:44.983734  614852 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:19:44.983876  614852 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0919 23:19:45.227592  614852 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:19:45.231170  614852 out.go:252]   - Generating certificates and keys ...
	I0919 23:19:45.231273  614852 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:19:45.231359  614852 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:19:45.347493  614852 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:19:45.528779  614852 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:19:45.628935  614852 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:19:45.804320  614852 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:19:46.279874  614852 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:19:46.280025  614852 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-359569] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:19:46.601558  614852 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:19:46.601774  614852 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-359569] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:19:46.850654  614852 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:19:47.103198  614852 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:19:47.343696  614852 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:19:47.343791  614852 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:19:47.593033  614852 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:19:47.762814  614852 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:19:47.898191  614852 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:19:48.155821  614852 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:19:48.156318  614852 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:19:48.160715  614852 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:19:48.162848  614852 out.go:252]   - Booting up control plane ...
	I0919 23:19:48.162993  614852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:19:48.163142  614852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:19:48.164112  614852 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:19:48.186030  614852 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:19:48.186931  614852 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:19:48.186994  614852 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:19:48.274309  614852 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 23:19:52.776904  614852 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502659 seconds
	I0919 23:19:52.777054  614852 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:19:52.791138  614852 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:19:53.311945  614852 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:19:53.312195  614852 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-359569 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:19:53.824478  614852 kubeadm.go:310] [bootstrap-token] Using token: geefs9.9miys9433ejjif8c
	I0919 23:19:53.825828  614852 out.go:252]   - Configuring RBAC rules ...
	I0919 23:19:53.825981  614852 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:19:53.832928  614852 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:19:53.841415  614852 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:19:53.844460  614852 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:19:53.847327  614852 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:19:53.850206  614852 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:19:53.862878  614852 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:19:54.065851  614852 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:19:54.237690  614852 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:19:54.238980  614852 kubeadm.go:310] 
	I0919 23:19:54.239071  614852 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:19:54.239083  614852 kubeadm.go:310] 
	I0919 23:19:54.239256  614852 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:19:54.239277  614852 kubeadm.go:310] 
	I0919 23:19:54.239349  614852 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:19:54.239449  614852 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:19:54.239524  614852 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:19:54.239533  614852 kubeadm.go:310] 
	I0919 23:19:54.239590  614852 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:19:54.239597  614852 kubeadm.go:310] 
	I0919 23:19:54.239702  614852 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:19:54.239722  614852 kubeadm.go:310] 
	I0919 23:19:54.239777  614852 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:19:54.239874  614852 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:19:54.239999  614852 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:19:54.240016  614852 kubeadm.go:310] 
	I0919 23:19:54.240139  614852 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:19:54.240235  614852 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:19:54.240242  614852 kubeadm.go:310] 
	I0919 23:19:54.240354  614852 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token geefs9.9miys9433ejjif8c \
	I0919 23:19:54.240554  614852 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 23:19:54.240603  614852 kubeadm.go:310] 	--control-plane 
	I0919 23:19:54.240614  614852 kubeadm.go:310] 
	I0919 23:19:54.240766  614852 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:19:54.240782  614852 kubeadm.go:310] 
	I0919 23:19:54.240903  614852 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token geefs9.9miys9433ejjif8c \
	I0919 23:19:54.241077  614852 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 23:19:54.245105  614852 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:19:54.245264  614852 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:19:54.245299  614852 cni.go:84] Creating CNI manager for ""
	I0919 23:19:54.245316  614852 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:19:54.247613  614852 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:19:54.248712  614852 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:19:54.260573  614852 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 23:19:54.281829  614852 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:19:54.281871  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:54.281978  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-359569 minikube.k8s.io/updated_at=2025_09_19T23_19_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=old-k8s-version-359569 minikube.k8s.io/primary=true
	I0919 23:19:54.366162  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:54.382672  614852 ops.go:34] apiserver oom_adj: -16
	I0919 23:19:54.866679  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:55.366649  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:55.866951  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:56.366742  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:56.866721  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:57.367138  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:57.867160  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:58.366744  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:58.867156  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:59.366649  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:19:59.867183  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:00.366632  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:00.867034  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:01.366622  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:01.866929  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:02.367149  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:02.866630  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:03.366728  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:03.866626  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:04.366488  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:04.866696  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:05.366761  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:05.867305  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:06.366363  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:06.866920  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:07.366736  614852 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:20:07.451593  614852 kubeadm.go:1105] duration metric: took 13.169768783s to wait for elevateKubeSystemPrivileges
	I0919 23:20:07.451624  614852 kubeadm.go:394] duration metric: took 22.771765568s to StartCluster
	I0919 23:20:07.451643  614852 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:20:07.451717  614852 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:20:07.453243  614852 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:20:07.453551  614852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:20:07.453561  614852 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:20:07.453620  614852 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:20:07.453765  614852 config.go:182] Loaded profile config "old-k8s-version-359569": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0919 23:20:07.453773  614852 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-359569"
	I0919 23:20:07.453801  614852 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-359569"
	I0919 23:20:07.453838  614852 host.go:66] Checking if "old-k8s-version-359569" exists ...
	I0919 23:20:07.453867  614852 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-359569"
	I0919 23:20:07.453920  614852 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-359569"
	I0919 23:20:07.454356  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:20:07.454402  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:20:07.456644  614852 out.go:179] * Verifying Kubernetes components...
	I0919 23:20:07.460316  614852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:20:07.484064  614852 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:20:07.485545  614852 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:20:07.485566  614852 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:20:07.485626  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:20:07.488283  614852 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-359569"
	I0919 23:20:07.488331  614852 host.go:66] Checking if "old-k8s-version-359569" exists ...
	I0919 23:20:07.488828  614852 cli_runner.go:164] Run: docker container inspect old-k8s-version-359569 --format={{.State.Status}}
	I0919 23:20:07.524156  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:20:07.524381  614852 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:20:07.524409  614852 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:20:07.524474  614852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-359569
	I0919 23:20:07.554073  614852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/old-k8s-version-359569/id_rsa Username:docker}
	I0919 23:20:07.578360  614852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:20:07.626483  614852 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:20:07.655048  614852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:20:07.695614  614852 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:20:08.420288  614852 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:20:08.421373  614852 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-359569" to be "Ready" ...
	I0919 23:20:08.432565  614852 node_ready.go:49] node "old-k8s-version-359569" is "Ready"
	I0919 23:20:08.432602  614852 node_ready.go:38] duration metric: took 11.184625ms for node "old-k8s-version-359569" to be "Ready" ...
	I0919 23:20:08.432631  614852 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:20:08.432690  614852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:08.617998  614852 api_server.go:72] duration metric: took 1.164398538s to wait for apiserver process to appear ...
	I0919 23:20:08.618029  614852 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:20:08.618052  614852 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0919 23:20:08.625173  614852 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0919 23:20:08.626475  614852 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:20:08.626639  614852 api_server.go:141] control plane version: v1.28.0
	I0919 23:20:08.626667  614852 api_server.go:131] duration metric: took 8.629745ms to wait for apiserver health ...
	I0919 23:20:08.626681  614852 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:20:08.627391  614852 addons.go:514] duration metric: took 1.173772754s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:20:08.631726  614852 system_pods.go:59] 8 kube-system pods found
	I0919 23:20:08.631775  614852 system_pods.go:61] "coredns-5dd5756b68-5ks6s" [9abc3124-e7b5-4a59-b0f0-92fc49035974] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:20:08.631792  614852 system_pods.go:61] "coredns-5dd5756b68-q75nl" [0fafe72c-6f1b-4001-971f-54b044acb1cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:20:08.631799  614852 system_pods.go:61] "etcd-old-k8s-version-359569" [63d47d38-c1d6-4cc4-96f8-78303a9ea47c] Running
	I0919 23:20:08.631812  614852 system_pods.go:61] "kube-apiserver-old-k8s-version-359569" [f132cc77-f5cc-454c-a7c1-6215572c0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:20:08.631820  614852 system_pods.go:61] "kube-controller-manager-old-k8s-version-359569" [5780e34f-8ed6-41b3-a398-c9d982f5725e] Running
	I0919 23:20:08.631827  614852 system_pods.go:61] "kube-proxy-hvp2z" [8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:20:08.631831  614852 system_pods.go:61] "kube-scheduler-old-k8s-version-359569" [3934713a-ad2f-4ada-9bb0-d59a926db817] Running
	I0919 23:20:08.631842  614852 system_pods.go:61] "storage-provisioner" [ef0a9cd7-6497-4877-8fc6-286067f0db01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:20:08.631848  614852 system_pods.go:74] duration metric: took 5.157701ms to wait for pod list to return data ...
	I0919 23:20:08.631865  614852 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:20:08.634042  614852 default_sa.go:45] found service account: "default"
	I0919 23:20:08.634068  614852 default_sa.go:55] duration metric: took 2.19577ms for default service account to be created ...
	I0919 23:20:08.634079  614852 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:20:08.638419  614852 system_pods.go:86] 8 kube-system pods found
	I0919 23:20:08.638490  614852 system_pods.go:89] "coredns-5dd5756b68-5ks6s" [9abc3124-e7b5-4a59-b0f0-92fc49035974] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:20:08.638523  614852 system_pods.go:89] "coredns-5dd5756b68-q75nl" [0fafe72c-6f1b-4001-971f-54b044acb1cd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:20:08.638532  614852 system_pods.go:89] "etcd-old-k8s-version-359569" [63d47d38-c1d6-4cc4-96f8-78303a9ea47c] Running
	I0919 23:20:08.638541  614852 system_pods.go:89] "kube-apiserver-old-k8s-version-359569" [f132cc77-f5cc-454c-a7c1-6215572c0277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:20:08.638551  614852 system_pods.go:89] "kube-controller-manager-old-k8s-version-359569" [5780e34f-8ed6-41b3-a398-c9d982f5725e] Running
	I0919 23:20:08.638571  614852 system_pods.go:89] "kube-proxy-hvp2z" [8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:20:08.638577  614852 system_pods.go:89] "kube-scheduler-old-k8s-version-359569" [3934713a-ad2f-4ada-9bb0-d59a926db817] Running
	I0919 23:20:08.638588  614852 system_pods.go:89] "storage-provisioner" [ef0a9cd7-6497-4877-8fc6-286067f0db01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:20:08.638598  614852 system_pods.go:126] duration metric: took 4.512241ms to wait for k8s-apps to be running ...
	I0919 23:20:08.638607  614852 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:20:08.638664  614852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:20:08.653768  614852 system_svc.go:56] duration metric: took 15.144977ms WaitForService to wait for kubelet
	I0919 23:20:08.653811  614852 kubeadm.go:578] duration metric: took 1.20021886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:20:08.653837  614852 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:20:08.656932  614852 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:20:08.656974  614852 node_conditions.go:123] node cpu capacity is 8
	I0919 23:20:08.656988  614852 node_conditions.go:105] duration metric: took 3.146443ms to run NodePressure ...
	I0919 23:20:08.657001  614852 start.go:241] waiting for startup goroutines ...
	I0919 23:20:08.925193  614852 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-359569" context rescaled to 1 replicas
	I0919 23:20:08.925242  614852 start.go:246] waiting for cluster config update ...
	I0919 23:20:08.925258  614852 start.go:255] writing updated cluster config ...
	I0919 23:20:08.925653  614852 ssh_runner.go:195] Run: rm -f paused
	I0919 23:20:08.930211  614852 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:20:08.935657  614852 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5ks6s" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:20:10.941254  614852 pod_ready.go:104] pod "coredns-5dd5756b68-5ks6s" is not "Ready", error: <nil>
	W0919 23:20:12.942347  614852 pod_ready.go:104] pod "coredns-5dd5756b68-5ks6s" is not "Ready", error: <nil>
	W0919 23:20:15.442133  614852 pod_ready.go:104] pod "coredns-5dd5756b68-5ks6s" is not "Ready", error: <nil>
	W0919 23:20:17.942096  614852 pod_ready.go:104] pod "coredns-5dd5756b68-5ks6s" is not "Ready", error: <nil>
	W0919 23:20:19.942820  614852 pod_ready.go:104] pod "coredns-5dd5756b68-5ks6s" is not "Ready", error: <nil>
	I0919 23:20:22.438493  614852 pod_ready.go:99] pod "coredns-5dd5756b68-5ks6s" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-5ks6s" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-5ks6s" not found
	I0919 23:20:22.438539  614852 pod_ready.go:86] duration metric: took 13.502847262s for pod "coredns-5dd5756b68-5ks6s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:22.438554  614852 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-q75nl" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:20:24.444758  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:26.444999  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:28.445125  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:30.445926  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:32.945031  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:34.945531  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:36.945710  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:39.443996  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:41.445075  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:43.946159  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:46.445333  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:48.945097  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:50.945798  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:53.445376  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:55.945291  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:20:58.445356  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:00.945223  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:03.446296  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:05.945568  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:08.444230  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:10.445471  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:12.446383  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:14.945639  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:17.445566  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:19.946453  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:22.446297  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:24.945955  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:27.444607  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:29.444826  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:31.445941  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:33.945253  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:36.447639  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:38.944889  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:41.445368  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:43.944693  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:45.945540  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:48.445154  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:50.445213  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:52.944872  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:55.445769  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:21:57.944854  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:00.444833  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:02.944296  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:04.945079  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:07.445246  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:09.944981  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:12.444639  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:14.944876  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:17.445146  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:19.944376  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:21.944597  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:23.944998  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:25.945102  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:28.445175  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:30.445319  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:32.945122  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:35.444550  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:37.944989  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:40.445572  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:42.944893  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:45.444490  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:47.943879  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:49.944118  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:51.944494  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:54.444707  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:56.944026  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:22:58.944472  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:00.944674  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:03.443993  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:05.444734  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:07.943823  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:09.944783  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:11.945275  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:14.445233  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:16.945282  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:18.945723  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:21.445716  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:23.944601  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:26.444883  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:28.445003  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:30.445544  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:32.944856  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:34.945046  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:36.946315  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:39.445666  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:41.945070  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:43.946798  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:46.444274  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:48.444778  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:50.445605  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:52.944464  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:54.944755  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:57.444603  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:59.445059  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:01.449514  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:03.944093  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:05.944580  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:08.444129  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:24:08.930587  614852 pod_ready.go:86] duration metric: took 3m46.492017742s for pod "coredns-5dd5756b68-q75nl" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:24:08.930640  614852 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 23:24:08.930657  614852 pod_ready.go:40] duration metric: took 4m0.000400198s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:08.932547  614852 out.go:203] 
	W0919 23:24:08.933705  614852 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 23:24:08.934737  614852 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0": exit status 80
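The failure above is a wait timeout rather than a crash: the kube-system pod coredns-5dd5756b68-q75nl never reported Ready before the 4-minute extra-wait deadline, so minikube exits with GUEST_START (exit status 80). As a sketch only (not part of the test harness), the stuck pod could be inspected by hand against the same profile, assuming the cluster came up far enough for minikube to have written a kubeconfig context named after the profile:

    kubectl --context old-k8s-version-359569 -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl --context old-k8s-version-359569 -n kube-system describe pod -l k8s-app=kube-dns
    kubectl --context old-k8s-version-359569 -n kube-system logs -l k8s-app=kube-dns --tail=50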
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-359569
helpers_test.go:243: (dbg) docker inspect old-k8s-version-359569:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	        "Created": "2025-09-19T23:19:37.347852462Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 615743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:19:37.405623577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hosts",
	        "LogPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05-json.log",
	        "Name": "/old-k8s-version-359569",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-359569:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-359569",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	                "LowerDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-359569",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-359569/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-359569",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b726aaf96c946931a6120d4e7c36f04b94b2159be83e38fe449d17725bdb8af4",
	            "SandboxKey": "/var/run/docker/netns/b726aaf96c94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-359569": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:17:79:ad:35:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1de8892b98e15a33d7c5eadc8f8aa4724fe6ba0a68c7bcaff3b9263e169c715",
	                    "EndpointID": "8c2d9fcf272337f0d2e8ea4b1ed4c7a62e3453e15f6d7576cebbebdef38b3afe",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-359569",
	                        "1ae574ad604d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
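The full docker inspect dump above can be narrowed to the fields the post-mortem actually relies on (container state, published ports, assigned IP). These are standard docker CLI --format queries, shown here only as a convenience sketch against the same container name:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' old-k8s-version-359569
    docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-359569
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-359569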
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359569 logs -n 25
E0919 23:24:09.586961  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                    ARGS                                                                                    │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kubenet-361266 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                    │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cat /var/lib/kubelet/config.yaml                                                                                                                    │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl status docker --all --full --no-pager                                                                                                     │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl cat docker --no-pager                                                                                                                     │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cat /etc/docker/daemon.json                                                                                                                         │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo docker system info                                                                                                                                  │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl status cri-docker --all --full --no-pager                                                                                                 │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p bridge-361266                                                                                                                                                           │ bridge-361266                │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl cat cri-docker --no-pager                                                                                                                 │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                            │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                      │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cri-dockerd --version                                                                                                                               │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl status containerd --all --full --no-pager                                                                                                 │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl cat containerd --no-pager                                                                                                                 │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo cat /lib/systemd/system/containerd.service                                                                                                          │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                   │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p kubenet-361266 sudo cat /etc/containerd/config.toml                                                                                                                     │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo containerd config dump                                                                                                                              │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo systemctl status crio --all --full --no-pager                                                                                                       │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │                     │
	│ ssh     │ -p kubenet-361266 sudo systemctl cat crio --no-pager                                                                                                                       │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                             │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ ssh     │ -p kubenet-361266 sudo crio config                                                                                                                                         │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p kubenet-361266                                                                                                                                                          │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p disable-driver-mounts-481061                                                                                                                                            │ disable-driver-mounts-481061 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:23:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:23:32.501720  651681 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:23:32.502027  651681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:23:32.502038  651681 out.go:374] Setting ErrFile to fd 2...
	I0919 23:23:32.502042  651681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:23:32.502311  651681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:23:32.502910  651681 out.go:368] Setting JSON to false
	I0919 23:23:32.504294  651681 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7548,"bootTime":1758316664,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:23:32.504396  651681 start.go:140] virtualization: kvm guest
	I0919 23:23:32.506252  651681 out.go:179] * [default-k8s-diff-port-485703] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:23:32.507339  651681 notify.go:220] Checking for updates...
	I0919 23:23:32.507381  651681 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:23:32.508407  651681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:23:32.509479  651681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:23:32.510660  651681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:23:32.511700  651681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:23:32.512752  651681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:23:32.517506  651681 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:32.517636  651681 config.go:182] Loaded profile config "no-preload-834234": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:32.517766  651681 config.go:182] Loaded profile config "old-k8s-version-359569": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I0919 23:23:32.517885  651681 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:23:32.546570  651681 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:23:32.546670  651681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:23:32.608365  651681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:23:32.598057536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:23:32.608467  651681 docker.go:318] overlay module found
	I0919 23:23:32.610097  651681 out.go:179] * Using the docker driver based on user configuration
	I0919 23:23:32.611131  651681 start.go:304] selected driver: docker
	I0919 23:23:32.611145  651681 start.go:918] validating driver "docker" against <nil>
	I0919 23:23:32.611157  651681 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:23:32.611703  651681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:23:32.668830  651681 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:23:32.658698793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:23:32.669021  651681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:23:32.669248  651681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:23:32.671234  651681 out.go:179] * Using Docker driver with root privileges
	I0919 23:23:32.672249  651681 cni.go:84] Creating CNI manager for ""
	I0919 23:23:32.672334  651681 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:23:32.672346  651681 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:23:32.672421  651681 start.go:348] cluster config:
	{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:23:32.673535  651681 out.go:179] * Starting "default-k8s-diff-port-485703" primary control-plane node in "default-k8s-diff-port-485703" cluster
	I0919 23:23:32.674355  651681 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:23:32.675231  651681 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:23:32.676059  651681 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:23:32.676098  651681 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 23:23:32.676108  651681 cache.go:58] Caching tarball of preloaded images
	I0919 23:23:32.676161  651681 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:23:32.676213  651681 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:23:32.676229  651681 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 23:23:32.676353  651681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:23:32.676379  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json: {Name:mk10c935206e26e015f5ba6c8cb56b59d8222a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:32.696918  651681 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:23:32.696949  651681 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:23:32.696970  651681 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:23:32.697002  651681 start.go:360] acquireMachinesLock for default-k8s-diff-port-485703: {Name:mk6951b47a07a3f8003f766143829366ba3d9245 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:23:32.697115  651681 start.go:364] duration metric: took 89.991µs to acquireMachinesLock for "default-k8s-diff-port-485703"
	I0919 23:23:32.697145  651681 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:23:32.697214  651681 start.go:125] createHost starting for "" (driver="docker")
	W0919 23:23:30.445544  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:32.944856  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:29.744021  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:32.244058  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:29.692740  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Running}}
	I0919 23:23:29.712806  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:29.732715  648050 cli_runner.go:164] Run: docker exec embed-certs-253767 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:23:29.785729  648050 oci.go:144] the created container "embed-certs-253767" has a running status.
	I0919 23:23:29.785767  648050 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa...
	I0919 23:23:29.854620  648050 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:23:29.884651  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:29.907238  648050 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:23:29.907264  648050 kic_runner.go:114] Args: [docker exec --privileged embed-certs-253767 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:23:29.961592  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:29.989886  648050 machine.go:93] provisionDockerMachine start ...
	I0919 23:23:29.990067  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:30.015395  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:30.015766  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:30.015779  648050 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:23:30.161709  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:23:30.161769  648050 ubuntu.go:182] provisioning hostname "embed-certs-253767"
	I0919 23:23:30.161839  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:30.185628  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:30.185989  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:30.186011  648050 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253767 && echo "embed-certs-253767" | sudo tee /etc/hostname
	I0919 23:23:30.370556  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:23:30.370645  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:30.390019  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:30.390259  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:30.390276  648050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253767/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:23:30.531198  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:23:30.531233  648050 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:23:30.531261  648050 ubuntu.go:190] setting up certificates
	I0919 23:23:30.531273  648050 provision.go:84] configureAuth start
	I0919 23:23:30.531332  648050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:23:30.549477  648050 provision.go:143] copyHostCerts
	I0919 23:23:30.549559  648050 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:23:30.549572  648050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:23:30.549641  648050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:23:30.549753  648050 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:23:30.549763  648050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:23:30.549789  648050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:23:30.549842  648050 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:23:30.549849  648050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:23:30.549880  648050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:23:30.549926  648050 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253767 san=[127.0.0.1 192.168.94.2 embed-certs-253767 localhost minikube]
	I0919 23:23:30.659486  648050 provision.go:177] copyRemoteCerts
	I0919 23:23:30.659571  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:23:30.659610  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:30.676962  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:30.774712  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:23:30.803020  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 23:23:30.829368  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:23:30.854943  648050 provision.go:87] duration metric: took 323.638081ms to configureAuth
	I0919 23:23:30.854978  648050 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:23:30.855162  648050 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:30.855241  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:30.872467  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:30.872705  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:30.872719  648050 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:23:31.008404  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:23:31.008428  648050 ubuntu.go:71] root file system type: overlay
	I0919 23:23:31.008612  648050 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:23:31.008685  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:31.026755  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:31.026981  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:31.027085  648050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:23:31.178854  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:23:31.178932  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:31.197197  648050 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:31.197413  648050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33137 <nil> <nil>}
	I0919 23:23:31.197429  648050 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:23:32.506223  648050 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 23:23:31.175683553 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 23:23:32.506253  648050 machine.go:96] duration metric: took 2.516303116s to provisionDockerMachine
	I0919 23:23:32.506264  648050 client.go:171] duration metric: took 7.739280066s to LocalClient.Create
	I0919 23:23:32.506284  648050 start.go:167] duration metric: took 7.739373705s to libmachine.API.Create "embed-certs-253767"
	I0919 23:23:32.506293  648050 start.go:293] postStartSetup for "embed-certs-253767" (driver="docker")
	I0919 23:23:32.506304  648050 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:23:32.506373  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:23:32.506429  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:32.529996  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:32.639854  648050 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:23:32.644339  648050 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:23:32.644381  648050 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:23:32.644398  648050 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:23:32.644411  648050 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:23:32.644428  648050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:23:32.644527  648050 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:23:32.644646  648050 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:23:32.644916  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:23:32.656684  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:23:32.685986  648050 start.go:296] duration metric: took 179.677686ms for postStartSetup
	I0919 23:23:32.686286  648050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:23:32.705010  648050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/config.json ...
	I0919 23:23:32.705312  648050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:23:32.705362  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:32.725022  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:32.820728  648050 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:23:32.826059  648050 start.go:128] duration metric: took 8.06128887s to createHost
	I0919 23:23:32.826088  648050 start.go:83] releasing machines lock for "embed-certs-253767", held for 8.061449376s
	I0919 23:23:32.826164  648050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:23:32.844595  648050 ssh_runner.go:195] Run: cat /version.json
	I0919 23:23:32.844645  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:32.844692  648050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:23:32.844767  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:32.863467  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:32.864151  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:33.037959  648050 ssh_runner.go:195] Run: systemctl --version
	I0919 23:23:33.043400  648050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:23:33.048203  648050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:23:33.079976  648050 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:23:33.080082  648050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:23:33.118032  648050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:23:33.118067  648050 start.go:495] detecting cgroup driver to use...
	I0919 23:23:33.118104  648050 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:23:33.118248  648050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:23:33.140405  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:23:33.153733  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:23:33.164734  648050 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:23:33.164792  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:23:33.182842  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:23:33.194930  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:23:33.206472  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:23:33.217390  648050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:23:33.227706  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:23:33.239754  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:23:33.252911  648050 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:23:33.264761  648050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:23:33.274725  648050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:23:33.284482  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:33.368573  648050 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:23:33.457299  648050 start.go:495] detecting cgroup driver to use...
	I0919 23:23:33.457354  648050 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:23:33.457407  648050 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:23:33.473127  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:23:33.486683  648050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:23:33.505668  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:23:33.518683  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:23:33.534080  648050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:23:33.553033  648050 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:23:33.557417  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:23:33.568317  648050 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:23:33.588824  648050 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:23:33.673719  648050 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:23:33.751173  648050 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:23:33.751301  648050 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:23:33.771189  648050 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:23:33.784214  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:33.875656  648050 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:23:36.597404  648050 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.721705953s)
	I0919 23:23:36.597493  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:23:36.613008  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:23:36.631554  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:23:36.652921  648050 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:23:36.756712  648050 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:23:36.839765  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:36.917014  648050 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:23:36.941205  648050 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:23:36.954749  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:37.030749  648050 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:23:37.109445  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:23:37.123279  648050 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:23:37.123349  648050 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:23:37.127245  648050 start.go:563] Will wait 60s for crictl version
	I0919 23:23:37.127291  648050 ssh_runner.go:195] Run: which crictl
	I0919 23:23:37.130813  648050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:23:37.166143  648050 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:23:37.166213  648050 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:23:37.192935  648050 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:23:32.699266  651681 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:23:32.699558  651681 start.go:159] libmachine.API.Create for "default-k8s-diff-port-485703" (driver="docker")
	I0919 23:23:32.699602  651681 client.go:168] LocalClient.Create starting
	I0919 23:23:32.699748  651681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem
	I0919 23:23:32.699796  651681 main.go:141] libmachine: Decoding PEM data...
	I0919 23:23:32.699822  651681 main.go:141] libmachine: Parsing certificate...
	I0919 23:23:32.699898  651681 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem
	I0919 23:23:32.699925  651681 main.go:141] libmachine: Decoding PEM data...
	I0919 23:23:32.699949  651681 main.go:141] libmachine: Parsing certificate...
	I0919 23:23:32.700370  651681 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-485703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:23:32.717809  651681 cli_runner.go:211] docker network inspect default-k8s-diff-port-485703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:23:32.717889  651681 network_create.go:284] running [docker network inspect default-k8s-diff-port-485703] to gather additional debugging logs...
	I0919 23:23:32.717911  651681 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-485703
	W0919 23:23:32.735681  651681 cli_runner.go:211] docker network inspect default-k8s-diff-port-485703 returned with exit code 1
	I0919 23:23:32.735712  651681 network_create.go:287] error running [docker network inspect default-k8s-diff-port-485703]: docker network inspect default-k8s-diff-port-485703: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-485703 not found
	I0919 23:23:32.735731  651681 network_create.go:289] output of [docker network inspect default-k8s-diff-port-485703]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-485703 not found
	
	** /stderr **
	I0919 23:23:32.735866  651681 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:23:32.754863  651681 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db7021220859 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:a3:92:23:56:8a} reservation:<nil>}
	I0919 23:23:32.755512  651681 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-683ec4c6685e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:06:9d:60:92:e5:85} reservation:<nil>}
	I0919 23:23:32.756262  651681 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b9a40fa74e58 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:96:8a:56:fb:db:9d} reservation:<nil>}
	I0919 23:23:32.756878  651681 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c94fc5e439b0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:2d:f2:c8:3c:6b} reservation:<nil>}
	I0919 23:23:32.757735  651681 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce7130}
	I0919 23:23:32.757759  651681 network_create.go:124] attempt to create docker network default-k8s-diff-port-485703 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0919 23:23:32.757798  651681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-485703 default-k8s-diff-port-485703
	I0919 23:23:32.815191  651681 network_create.go:108] docker network default-k8s-diff-port-485703 192.168.85.0/24 created
	I0919 23:23:32.815223  651681 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-485703" container
	I0919 23:23:32.815283  651681 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:23:32.835219  651681 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-485703 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-485703 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:23:32.854987  651681 oci.go:103] Successfully created a docker volume default-k8s-diff-port-485703
	I0919 23:23:32.855078  651681 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-485703-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-485703 --entrypoint /usr/bin/test -v default-k8s-diff-port-485703:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:23:33.229738  651681 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-485703
	I0919 23:23:33.229786  651681 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:23:33.229813  651681 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:23:33.229867  651681 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-485703:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 23:23:35.987571  651681 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-485703:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (2.757492174s)
	I0919 23:23:35.987612  651681 kic.go:203] duration metric: took 2.75779744s to extract preloaded images to volume ...
	W0919 23:23:35.987725  651681 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:23:35.987754  651681 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:23:35.987800  651681 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:23:36.043098  651681 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-485703 --name default-k8s-diff-port-485703 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-485703 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-485703 --network default-k8s-diff-port-485703 --ip 192.168.85.2 --volume default-k8s-diff-port-485703:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:23:36.391784  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Running}}
	I0919 23:23:36.411669  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:36.432263  651681 cli_runner.go:164] Run: docker exec default-k8s-diff-port-485703 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:23:36.481401  651681 oci.go:144] the created container "default-k8s-diff-port-485703" has a running status.
	I0919 23:23:36.481435  651681 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa...
	I0919 23:23:36.619300  651681 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:23:36.654476  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:36.678670  651681 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:23:36.678697  651681 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-485703 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:23:36.732145  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:36.754713  651681 machine.go:93] provisionDockerMachine start ...
	I0919 23:23:36.754798  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:36.777152  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:36.777522  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:36.777558  651681 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:23:36.923493  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:23:36.923536  651681 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-485703"
	I0919 23:23:36.923600  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:36.943802  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:36.944106  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:36.944135  651681 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-485703 && echo "default-k8s-diff-port-485703" | sudo tee /etc/hostname
	I0919 23:23:37.098949  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:23:37.099034  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:37.119556  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:37.119795  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:37.119818  651681 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-485703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-485703/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-485703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:23:37.257989  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:23:37.258074  651681 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:23:37.258103  651681 ubuntu.go:190] setting up certificates
	I0919 23:23:37.258121  651681 provision.go:84] configureAuth start
	I0919 23:23:37.258182  651681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:23:37.277998  651681 provision.go:143] copyHostCerts
	I0919 23:23:37.278060  651681 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:23:37.278075  651681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:23:37.278160  651681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:23:37.278279  651681 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:23:37.278291  651681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:23:37.278332  651681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:23:37.278411  651681 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:23:37.278421  651681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:23:37.278458  651681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:23:37.278554  651681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-485703 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-485703 localhost minikube]
	I0919 23:23:37.431357  651681 provision.go:177] copyRemoteCerts
	I0919 23:23:37.431415  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:23:37.431470  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:37.451526  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	W0919 23:23:34.945046  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:36.946315  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:34.742996  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:36.746144  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:37.221837  648050 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:23:37.221918  648050 cli_runner.go:164] Run: docker network inspect embed-certs-253767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:23:37.237904  648050 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:23:37.242687  648050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:23:37.254533  648050 kubeadm.go:875] updating cluster {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:23:37.254693  648050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:23:37.254759  648050 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:23:37.277746  648050 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:23:37.277770  648050 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:23:37.277834  648050 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:23:37.301019  648050 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:23:37.301056  648050 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:23:37.301068  648050 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0919 23:23:37.301186  648050 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-253767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:23:37.301260  648050 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:23:37.355151  648050 cni.go:84] Creating CNI manager for ""
	I0919 23:23:37.355213  648050 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:23:37.355230  648050 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:23:37.355256  648050 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253767 NodeName:embed-certs-253767 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:23:37.355382  648050 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-253767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:23:37.355440  648050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:23:37.365786  648050 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:23:37.365907  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:23:37.376099  648050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 23:23:37.395016  648050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:23:37.415074  648050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0919 23:23:37.435169  648050 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:23:37.439296  648050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:23:37.453867  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:37.529796  648050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:23:37.553729  648050 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767 for IP: 192.168.94.2
	I0919 23:23:37.553751  648050 certs.go:194] generating shared ca certs ...
	I0919 23:23:37.553770  648050 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:37.553947  648050 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:23:37.553996  648050 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:23:37.554009  648050 certs.go:256] generating profile certs ...
	I0919 23:23:37.554077  648050 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.key
	I0919 23:23:37.554097  648050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.crt with IP's: []
	I0919 23:23:37.981259  648050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.crt ...
	I0919 23:23:37.981287  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.crt: {Name:mked83968b2b1160587e45c17cb98ad7469c92c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:37.981481  648050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.key ...
	I0919 23:23:37.981513  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.key: {Name:mkfa00fea93f2a67a5d6320e8cb8581c639248d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:37.981658  648050 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca
	I0919 23:23:37.981684  648050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt.590657ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0919 23:23:38.163099  648050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt.590657ca ...
	I0919 23:23:38.163142  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt.590657ca: {Name:mk10b4f7ad734a4bd0863292255000c30bca8a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:38.163374  648050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca ...
	I0919 23:23:38.163403  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca: {Name:mk400895703e7fc32b9f562c7e23e2ac6638175b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:38.163538  648050 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt.590657ca -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt
	I0919 23:23:38.163657  648050 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key
	I0919 23:23:38.163747  648050 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key
	I0919 23:23:38.163766  648050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt with IP's: []
	I0919 23:23:38.269620  648050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt ...
	I0919 23:23:38.269655  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt: {Name:mkd34b6b43ab8219aa5a6731ac08a8830ed938b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:38.269838  648050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key ...
	I0919 23:23:38.269862  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key: {Name:mkbeee1bc327abefa9a931997e3b405281f58fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:38.270088  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:23:38.270140  648050 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:23:38.270156  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:23:38.270193  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:23:38.270230  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:23:38.270262  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:23:38.270317  648050 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:23:38.271067  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:23:38.300473  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:23:38.332054  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:23:38.358751  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:23:38.388578  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0919 23:23:38.414994  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:23:38.441092  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:23:38.471275  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:23:38.500986  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:23:38.531487  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:23:38.558102  648050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:23:38.584035  648050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:23:38.602609  648050 ssh_runner.go:195] Run: openssl version
	I0919 23:23:38.608587  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:23:38.619466  648050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:23:38.623299  648050 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:23:38.623343  648050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:23:38.630474  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:23:38.640967  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:23:38.651169  648050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:38.655032  648050 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:38.655086  648050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:38.662226  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:23:38.671947  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:23:38.682104  648050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:23:38.686150  648050 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:23:38.686200  648050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:23:38.693249  648050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:23:38.703131  648050 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:23:38.706668  648050 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:23:38.706728  648050 kubeadm.go:392] StartCluster: {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:23:38.706861  648050 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:23:38.727374  648050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:23:38.736848  648050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:23:38.747875  648050 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:23:38.747937  648050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:23:38.757863  648050 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:23:38.757887  648050 kubeadm.go:157] found existing configuration files:
	
	I0919 23:23:38.757935  648050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:23:38.767422  648050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:23:38.767486  648050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:23:38.776426  648050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:23:38.786315  648050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:23:38.786374  648050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:23:38.796054  648050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:23:38.805015  648050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:23:38.805077  648050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:23:38.813951  648050 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:23:38.823137  648050 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:23:38.823211  648050 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:23:38.834401  648050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:23:38.902090  648050 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:23:38.962114  648050 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:23:37.550319  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:23:37.581462  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 23:23:37.610058  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:23:37.635953  651681 provision.go:87] duration metric: took 377.810348ms to configureAuth
	I0919 23:23:37.635987  651681 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:23:37.636186  651681 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:37.636271  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:37.655114  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:37.655338  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:37.655351  651681 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:23:37.793237  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:23:37.793266  651681 ubuntu.go:71] root file system type: overlay
	I0919 23:23:37.793386  651681 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:23:37.793449  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:37.811063  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:37.811370  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:37.811476  651681 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:23:37.965004  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:23:37.965137  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:37.984919  651681 main.go:141] libmachine: Using SSH client type: native
	I0919 23:23:37.985218  651681 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I0919 23:23:37.985242  651681 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:23:39.112756  651681 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-19 23:23:37.963118220 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0919 23:23:39.112798  651681 machine.go:96] duration metric: took 2.358062192s to provisionDockerMachine
	I0919 23:23:39.112815  651681 client.go:171] duration metric: took 6.413201611s to LocalClient.Create
	I0919 23:23:39.112838  651681 start.go:167] duration metric: took 6.413282201s to libmachine.API.Create "default-k8s-diff-port-485703"
	I0919 23:23:39.112851  651681 start.go:293] postStartSetup for "default-k8s-diff-port-485703" (driver="docker")
	I0919 23:23:39.112875  651681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:23:39.112951  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:23:39.113001  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:39.131331  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:39.232387  651681 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:23:39.236476  651681 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:23:39.236547  651681 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:23:39.236572  651681 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:23:39.236579  651681 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:23:39.236591  651681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:23:39.236638  651681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:23:39.236724  651681 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:23:39.236838  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:23:39.248478  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:23:39.277585  651681 start.go:296] duration metric: took 164.718232ms for postStartSetup
	I0919 23:23:39.277937  651681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:23:39.295493  651681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:23:39.295761  651681 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:23:39.295816  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:39.313231  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:39.406239  651681 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:23:39.411264  651681 start.go:128] duration metric: took 6.714023152s to createHost
	I0919 23:23:39.411293  651681 start.go:83] releasing machines lock for "default-k8s-diff-port-485703", held for 6.714164696s
	I0919 23:23:39.411355  651681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:23:39.429974  651681 ssh_runner.go:195] Run: cat /version.json
	I0919 23:23:39.430038  651681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:23:39.430049  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:39.430101  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:39.449441  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:39.450717  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:39.614787  651681 ssh_runner.go:195] Run: systemctl --version
	I0919 23:23:39.620087  651681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:23:39.624806  651681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:23:39.654888  651681 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:23:39.654979  651681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:23:39.683788  651681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:23:39.683817  651681 start.go:495] detecting cgroup driver to use...
	I0919 23:23:39.683857  651681 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:23:39.683971  651681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:23:39.701506  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:23:39.713474  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:23:39.724698  651681 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:23:39.724764  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:23:39.736190  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:23:39.748539  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:23:39.759258  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:23:39.770229  651681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:23:39.780019  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:23:39.790335  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:23:39.800651  651681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:23:39.811494  651681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:23:39.820287  651681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:23:39.830117  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:39.911920  651681 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:23:40.007522  651681 start.go:495] detecting cgroup driver to use...
	I0919 23:23:40.007575  651681 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:23:40.007625  651681 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:23:40.021950  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:23:40.033837  651681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:23:40.054336  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:23:40.066709  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:23:40.078567  651681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:23:40.095891  651681 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:23:40.099833  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:23:40.110586  651681 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:23:40.129466  651681 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:23:40.210492  651681 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:23:40.290916  651681 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:23:40.291035  651681 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:23:40.310334  651681 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:23:40.321897  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:40.393413  651681 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:23:41.179469  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:23:41.192277  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:23:41.205014  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:23:41.217780  651681 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:23:41.293373  651681 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:23:41.364432  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:41.435065  651681 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:23:41.457155  651681 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:23:41.470687  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:41.545311  651681 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:23:41.625133  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:23:41.637979  651681 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:23:41.638059  651681 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:23:41.641886  651681 start.go:563] Will wait 60s for crictl version
	I0919 23:23:41.641939  651681 ssh_runner.go:195] Run: which crictl
	I0919 23:23:41.645522  651681 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:23:41.681912  651681 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:23:41.681982  651681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:23:41.709314  651681 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:23:41.737119  651681 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:23:41.737231  651681 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-485703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:23:41.756043  651681 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:23:41.760216  651681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:23:41.772568  651681 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:23:41.772718  651681 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:23:41.772779  651681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:23:41.795119  651681 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:23:41.795156  651681 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:23:41.795237  651681 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:23:41.816918  651681 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 23:23:41.816945  651681 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:23:41.816958  651681 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 docker true true} ...
	I0919 23:23:41.817062  651681 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-485703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:23:41.817121  651681 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:23:41.874097  651681 cni.go:84] Creating CNI manager for ""
	I0919 23:23:41.874135  651681 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:23:41.874148  651681 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:23:41.874168  651681 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-485703 NodeName:default-k8s-diff-port-485703 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:23:41.874302  651681 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-485703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:23:41.874360  651681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:23:41.885392  651681 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:23:41.885469  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:23:41.894944  651681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0919 23:23:41.914234  651681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:23:41.933475  651681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0919 23:23:41.954032  651681 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:23:41.957993  651681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:23:41.969974  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:42.040224  651681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:23:42.063125  651681 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703 for IP: 192.168.85.2
	I0919 23:23:42.063148  651681 certs.go:194] generating shared ca certs ...
	I0919 23:23:42.063169  651681 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.063350  651681 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:23:42.063390  651681 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:23:42.063401  651681 certs.go:256] generating profile certs ...
	I0919 23:23:42.063545  651681 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.key
	I0919 23:23:42.063566  651681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.crt with IP's: []
	I0919 23:23:42.437702  651681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.crt ...
	I0919 23:23:42.437736  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.crt: {Name:mk92d1c17084edd324fe3e4cfbc55206bc5f0eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.437945  651681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.key ...
	I0919 23:23:42.437965  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.key: {Name:mked3f2a354fdd33d916a15afdf92d441757f50f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.438085  651681 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16
	I0919 23:23:42.438107  651681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt.66b5ce16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	W0919 23:23:39.445666  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:41.945070  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:23:42.750049  651681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt.66b5ce16 ...
	I0919 23:23:42.750077  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt.66b5ce16: {Name:mk17fe77a0d4160d1443f47ab65cae9bedc0ec6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.750251  651681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16 ...
	I0919 23:23:42.750265  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16: {Name:mke774a4c1acd82917a96407738832f9f8bdae1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.750337  651681 certs.go:381] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt.66b5ce16 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt
	I0919 23:23:42.750411  651681 certs.go:385] copying /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16 -> /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key
	I0919 23:23:42.750466  651681 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key
	I0919 23:23:42.750482  651681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt with IP's: []
	I0919 23:23:42.956632  651681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt ...
	I0919 23:23:42.956658  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt: {Name:mkc230b5676f86eda78ade3045658dc57473ec41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.956842  651681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key ...
	I0919 23:23:42.956861  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key: {Name:mk0242bb08424e77a2c0bd16bfe13f81d2035c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:42.957103  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:23:42.957162  651681 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:23:42.957177  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:23:42.957213  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:23:42.957250  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:23:42.957282  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:23:42.957335  651681 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:23:42.957972  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:23:42.988698  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:23:43.015910  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:23:43.041040  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:23:43.070431  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:23:43.103608  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:23:43.131471  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:23:43.155986  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:23:43.180324  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:23:43.210109  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:23:43.238173  651681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:23:43.263945  651681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:23:43.282395  651681 ssh_runner.go:195] Run: openssl version
	I0919 23:23:43.288336  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:23:43.298439  651681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:43.302119  651681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:43.302188  651681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:23:43.309067  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:23:43.318885  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:23:43.328677  651681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:23:43.332241  651681 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:23:43.332296  651681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:23:43.339360  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:23:43.348955  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:23:43.358490  651681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:23:43.361866  651681 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:23:43.361907  651681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:23:43.369104  651681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:23:43.379447  651681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:23:43.382853  651681 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:23:43.382906  651681 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:23:43.383029  651681 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:23:43.402492  651681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:23:43.412814  651681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:23:43.422059  651681 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:23:43.422113  651681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:23:43.431763  651681 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:23:43.431797  651681 kubeadm.go:157] found existing configuration files:
	
	I0919 23:23:43.431850  651681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0919 23:23:43.440446  651681 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:23:43.440524  651681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:23:43.450343  651681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0919 23:23:43.459474  651681 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:23:43.459555  651681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:23:43.469635  651681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0919 23:23:43.479120  651681 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:23:43.479169  651681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:23:43.488551  651681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0919 23:23:43.497925  651681 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:23:43.497971  651681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:23:43.507019  651681 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:23:43.554999  651681 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:23:43.555079  651681 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:23:43.577945  651681 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:23:43.578038  651681 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:23:43.578084  651681 kubeadm.go:310] OS: Linux
	I0919 23:23:43.578145  651681 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:23:43.578202  651681 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:23:43.578251  651681 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:23:43.578315  651681 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:23:43.578377  651681 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:23:43.578464  651681 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:23:43.578543  651681 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:23:43.578604  651681 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:23:43.659766  651681 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:23:43.660274  651681 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:23:43.660610  651681 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:23:43.685388  651681 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0919 23:23:39.244057  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:41.743807  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:43.687531  651681 out.go:252]   - Generating certificates and keys ...
	I0919 23:23:43.687674  651681 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:23:43.687914  651681 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:23:43.920209  651681 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:23:45.037527  651681 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:23:45.242421  651681 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:23:45.430187  651681 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:23:45.663810  651681 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:23:45.664018  651681 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-485703 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:23:46.014420  651681 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:23:46.014659  651681 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-485703 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0919 23:23:46.434300  651681 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:23:46.874818  651681 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:23:47.143981  651681 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:23:47.144210  651681 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:23:47.193812  651681 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:23:47.260021  651681 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:23:47.487845  651681 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:23:47.741004  651681 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:23:47.924455  651681 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:23:47.925035  651681 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:23:47.929037  651681 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0919 23:23:43.946798  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:46.444274  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:48.444778  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:23:48.968887  648050 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:23:48.968966  648050 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:23:48.969080  648050 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:23:48.969170  648050 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:23:48.969229  648050 kubeadm.go:310] OS: Linux
	I0919 23:23:48.969294  648050 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:23:48.969366  648050 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:23:48.969444  648050 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:23:48.969540  648050 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:23:48.969617  648050 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:23:48.969673  648050 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:23:48.969717  648050 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:23:48.969753  648050 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:23:48.969856  648050 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:23:48.970024  648050 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:23:48.970156  648050 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:23:48.970247  648050 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:23:48.971842  648050 out.go:252]   - Generating certificates and keys ...
	I0919 23:23:48.971950  648050 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:23:48.972075  648050 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:23:48.972197  648050 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:23:48.972278  648050 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:23:48.972371  648050 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:23:48.972441  648050 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:23:48.972525  648050 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:23:48.972672  648050 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-253767 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:23:48.972756  648050 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:23:48.972909  648050 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-253767 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0919 23:23:48.972982  648050 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:23:48.973052  648050 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:23:48.973100  648050 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:23:48.973145  648050 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:23:48.973212  648050 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:23:48.973290  648050 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:23:48.973375  648050 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:23:48.973466  648050 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:23:48.973567  648050 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:23:48.973676  648050 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:23:48.973776  648050 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:23:48.975128  648050 out.go:252]   - Booting up control plane ...
	I0919 23:23:48.975236  648050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:23:48.975331  648050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:23:48.975433  648050 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:23:48.975569  648050 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:23:48.975646  648050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:23:48.975760  648050 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:23:48.975870  648050 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:23:48.975907  648050 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:23:48.976040  648050 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:23:48.976126  648050 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:23:48.976182  648050 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001757176s
	I0919 23:23:48.976260  648050 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:23:48.976353  648050 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0919 23:23:48.976482  648050 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:23:48.976614  648050 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:23:48.976701  648050 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.564599734s
	I0919 23:23:48.976759  648050 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.273824193s
	I0919 23:23:48.976818  648050 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001360523s
	I0919 23:23:48.976918  648050 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:23:48.977076  648050 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:23:48.977127  648050 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:23:48.977297  648050 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-253767 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:23:48.977375  648050 kubeadm.go:310] [bootstrap-token] Using token: htwqrk.ti1wk7jhimolmz4e
	I0919 23:23:48.978685  648050 out.go:252]   - Configuring RBAC rules ...
	I0919 23:23:48.978819  648050 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:23:48.978901  648050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:23:48.979022  648050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:23:48.979189  648050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:23:48.979347  648050 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:23:48.979462  648050 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:23:48.979622  648050 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:23:48.979699  648050 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:23:48.979764  648050 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:23:48.979773  648050 kubeadm.go:310] 
	I0919 23:23:48.979856  648050 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:23:48.979867  648050 kubeadm.go:310] 
	I0919 23:23:48.979957  648050 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:23:48.979967  648050 kubeadm.go:310] 
	I0919 23:23:48.979989  648050 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:23:48.980044  648050 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:23:48.980117  648050 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:23:48.980136  648050 kubeadm.go:310] 
	I0919 23:23:48.980214  648050 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:23:48.980229  648050 kubeadm.go:310] 
	I0919 23:23:48.980301  648050 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:23:48.980315  648050 kubeadm.go:310] 
	I0919 23:23:48.980392  648050 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:23:48.980527  648050 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:23:48.980632  648050 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:23:48.980643  648050 kubeadm.go:310] 
	I0919 23:23:48.980748  648050 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:23:48.980850  648050 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:23:48.980861  648050 kubeadm.go:310] 
	I0919 23:23:48.980961  648050 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token htwqrk.ti1wk7jhimolmz4e \
	I0919 23:23:48.981094  648050 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 23:23:48.981143  648050 kubeadm.go:310] 	--control-plane 
	I0919 23:23:48.981151  648050 kubeadm.go:310] 
	I0919 23:23:48.981282  648050 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:23:48.981294  648050 kubeadm.go:310] 
	I0919 23:23:48.981393  648050 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token htwqrk.ti1wk7jhimolmz4e \
	I0919 23:23:48.981584  648050 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 23:23:48.981599  648050 cni.go:84] Creating CNI manager for ""
	I0919 23:23:48.981618  648050 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:23:48.983037  648050 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W0919 23:23:44.244059  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:46.742980  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:48.984131  648050 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:23:48.994787  648050 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 23:23:49.016482  648050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:23:49.016564  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:49.016583  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-253767 minikube.k8s.io/updated_at=2025_09_19T23_23_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=embed-certs-253767 minikube.k8s.io/primary=true
	I0919 23:23:49.025247  648050 ops.go:34] apiserver oom_adj: -16
	I0919 23:23:49.097875  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:47.930539  651681 out.go:252]   - Booting up control plane ...
	I0919 23:23:47.930690  651681 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:23:47.930804  651681 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:23:47.932666  651681 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:23:47.943452  651681 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:23:47.943622  651681 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:23:47.949745  651681 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:23:47.950096  651681 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:23:47.950164  651681 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:23:48.052776  651681 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:23:48.052893  651681 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:23:49.553762  651681 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501023909s
	I0919 23:23:49.557244  651681 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:23:49.557381  651681 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I0919 23:23:49.557578  651681 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:23:49.557709  651681 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:23:51.131700  651681 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.574360898s
	I0919 23:23:52.021803  651681 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.464530624s
	W0919 23:23:50.445605  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:52.944464  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:23:53.559367  651681 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002046631s
	I0919 23:23:53.572526  651681 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:23:53.582579  651681 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:23:53.590957  651681 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:23:53.591323  651681 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-485703 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:23:53.599290  651681 kubeadm.go:310] [bootstrap-token] Using token: xrr879.0vdevmspoo5ze10d
	I0919 23:23:49.598592  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:50.098490  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:50.598048  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:51.098777  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:51.598576  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:52.098060  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:52.598724  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:53.098622  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:53.598716  648050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:53.675611  648050 kubeadm.go:1105] duration metric: took 4.659114346s to wait for elevateKubeSystemPrivileges
	I0919 23:23:53.675651  648050 kubeadm.go:394] duration metric: took 14.96892921s to StartCluster
	I0919 23:23:53.675675  648050 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:53.675748  648050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:23:53.677252  648050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:53.677481  648050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:23:53.677486  648050 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:23:53.677615  648050 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:23:53.677719  648050 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:53.677729  648050 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253767"
	I0919 23:23:53.677743  648050 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253767"
	I0919 23:23:53.677774  648050 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-253767"
	I0919 23:23:53.677779  648050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253767"
	I0919 23:23:53.677810  648050 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:23:53.678149  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:53.678340  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:53.678952  648050 out.go:179] * Verifying Kubernetes components...
	I0919 23:23:53.680092  648050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:53.702963  648050 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0919 23:23:49.246077  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:51.743319  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:53.744628  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:53.702961  648050 addons.go:238] Setting addon default-storageclass=true in "embed-certs-253767"
	I0919 23:23:53.703066  648050 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:23:53.703619  648050 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:23:53.704679  648050 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:23:53.704702  648050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:23:53.704757  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:53.732955  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:53.735025  648050 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:23:53.735052  648050 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:23:53.735119  648050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:23:53.758601  648050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33137 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:23:53.783848  648050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:23:53.819465  648050 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:23:53.850803  648050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:23:53.877913  648050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:23:53.972628  648050 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0919 23:23:53.974160  648050 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253767" to be "Ready" ...
	I0919 23:23:53.985633  648050 node_ready.go:49] node "embed-certs-253767" is "Ready"
	I0919 23:23:53.985664  648050 node_ready.go:38] duration metric: took 11.461317ms for node "embed-certs-253767" to be "Ready" ...
	I0919 23:23:53.985686  648050 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:23:53.985736  648050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:23:54.230252  648050 api_server.go:72] duration metric: took 552.692713ms to wait for apiserver process to appear ...
	I0919 23:23:54.230333  648050 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:23:54.230382  648050 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:54.237562  648050 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0919 23:23:54.238711  648050 api_server.go:141] control plane version: v1.34.0
	I0919 23:23:54.238751  648050 api_server.go:131] duration metric: took 8.397976ms to wait for apiserver health ...
	I0919 23:23:54.238762  648050 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:23:54.239937  648050 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:23:54.241591  648050 addons.go:514] duration metric: took 563.975216ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:23:54.242820  648050 system_pods.go:59] 8 kube-system pods found
	I0919 23:23:54.242864  648050 system_pods.go:61] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.242886  648050 system_pods.go:61] "coredns-66bc5c9577-5gqnh" [3290e6ca-744f-4ca1-bb92-b727218a9d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.242896  648050 system_pods.go:61] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:54.242906  648050 system_pods.go:61] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:54.242919  648050 system_pods.go:61] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:23:54.242935  648050 system_pods.go:61] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:54.242948  648050 system_pods.go:61] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:23:54.242961  648050 system_pods.go:61] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:23:54.242969  648050 system_pods.go:74] duration metric: took 4.199911ms to wait for pod list to return data ...
	I0919 23:23:54.242980  648050 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:23:54.245457  648050 default_sa.go:45] found service account: "default"
	I0919 23:23:54.245480  648050 default_sa.go:55] duration metric: took 2.48878ms for default service account to be created ...
	I0919 23:23:54.245491  648050 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:23:54.248885  648050 system_pods.go:86] 8 kube-system pods found
	I0919 23:23:54.248926  648050 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.248937  648050 system_pods.go:89] "coredns-66bc5c9577-5gqnh" [3290e6ca-744f-4ca1-bb92-b727218a9d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.248948  648050 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:54.248957  648050 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:54.248967  648050 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:23:54.248982  648050 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:54.248993  648050 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:23:54.249000  648050 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:23:54.249029  648050 retry.go:31] will retry after 194.997687ms: missing components: kube-dns, kube-proxy
	I0919 23:23:54.451068  648050 system_pods.go:86] 8 kube-system pods found
	I0919 23:23:54.451110  648050 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.451120  648050 system_pods.go:89] "coredns-66bc5c9577-5gqnh" [3290e6ca-744f-4ca1-bb92-b727218a9d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.451130  648050 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:54.451141  648050 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:54.451147  648050 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running
	I0919 23:23:54.451156  648050 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:54.451163  648050 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:23:54.451171  648050 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:23:54.451192  648050 retry.go:31] will retry after 365.969705ms: missing components: kube-dns, kube-proxy
	I0919 23:23:54.476458  648050 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-253767" context rescaled to 1 replicas
	I0919 23:23:53.600952  651681 out.go:252]   - Configuring RBAC rules ...
	I0919 23:23:53.601125  651681 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:23:53.604213  651681 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:23:53.609700  651681 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:23:53.612370  651681 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:23:53.616167  651681 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:23:53.618766  651681 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:23:53.967940  651681 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:23:54.388205  651681 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:23:54.965225  651681 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:23:54.966103  651681 kubeadm.go:310] 
	I0919 23:23:54.966163  651681 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:23:54.966172  651681 kubeadm.go:310] 
	I0919 23:23:54.966237  651681 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:23:54.966244  651681 kubeadm.go:310] 
	I0919 23:23:54.966264  651681 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:23:54.966312  651681 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:23:54.966374  651681 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:23:54.966388  651681 kubeadm.go:310] 
	I0919 23:23:54.966436  651681 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:23:54.966458  651681 kubeadm.go:310] 
	I0919 23:23:54.966546  651681 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:23:54.966557  651681 kubeadm.go:310] 
	I0919 23:23:54.966612  651681 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:23:54.966675  651681 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:23:54.966729  651681 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:23:54.966752  651681 kubeadm.go:310] 
	I0919 23:23:54.966867  651681 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:23:54.966977  651681 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:23:54.966988  651681 kubeadm.go:310] 
	I0919 23:23:54.967121  651681 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xrr879.0vdevmspoo5ze10d \
	I0919 23:23:54.967274  651681 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a \
	I0919 23:23:54.967305  651681 kubeadm.go:310] 	--control-plane 
	I0919 23:23:54.967322  651681 kubeadm.go:310] 
	I0919 23:23:54.967437  651681 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:23:54.967452  651681 kubeadm.go:310] 
	I0919 23:23:54.967572  651681 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xrr879.0vdevmspoo5ze10d \
	I0919 23:23:54.967669  651681 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e34938835ca5de20dcd743043ff221a1493ef970b34561f39a513839570935a 
	I0919 23:23:54.971093  651681 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:23:54.971226  651681 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:23:54.971264  651681 cni.go:84] Creating CNI manager for ""
	I0919 23:23:54.971285  651681 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:23:54.973237  651681 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:23:54.974372  651681 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:23:54.985066  651681 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 23:23:55.005825  651681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:23:55.005899  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:55.005917  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-485703 minikube.k8s.io/updated_at=2025_09_19T23_23_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-485703 minikube.k8s.io/primary=true
	I0919 23:23:55.091895  651681 ops.go:34] apiserver oom_adj: -16
	I0919 23:23:55.092021  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:55.592722  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:56.092454  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:56.592127  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:57.092206  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0919 23:23:54.944755  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:57.444603  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:23:56.242326  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:58.243024  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:23:54.822132  648050 system_pods.go:86] 8 kube-system pods found
	I0919 23:23:54.822173  648050 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.822185  648050 system_pods.go:89] "coredns-66bc5c9577-5gqnh" [3290e6ca-744f-4ca1-bb92-b727218a9d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:54.822194  648050 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:54.822202  648050 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:54.822238  648050 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running
	I0919 23:23:54.822255  648050 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:54.822261  648050 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running
	I0919 23:23:54.822275  648050 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:23:54.822298  648050 retry.go:31] will retry after 473.137057ms: missing components: kube-dns, kube-proxy
	I0919 23:23:55.299615  648050 system_pods.go:86] 8 kube-system pods found
	I0919 23:23:55.299661  648050 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:55.299673  648050 system_pods.go:89] "coredns-66bc5c9577-5gqnh" [3290e6ca-744f-4ca1-bb92-b727218a9d5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:55.299693  648050 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:55.299703  648050 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:55.299710  648050 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running
	I0919 23:23:55.299717  648050 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:55.299725  648050 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running
	I0919 23:23:55.299734  648050 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:23:55.299757  648050 retry.go:31] will retry after 385.066244ms: missing components: kube-dns, kube-proxy
	I0919 23:23:55.689058  648050 system_pods.go:86] 7 kube-system pods found
	I0919 23:23:55.689091  648050 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:23:55.689101  648050 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:23:55.689110  648050 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:23:55.689114  648050 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running
	I0919 23:23:55.689119  648050 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:23:55.689124  648050 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running
	I0919 23:23:55.689128  648050 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Running
	I0919 23:23:55.689135  648050 system_pods.go:126] duration metric: took 1.443605544s to wait for k8s-apps to be running ...
	I0919 23:23:55.689145  648050 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:23:55.689190  648050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:23:55.702070  648050 system_svc.go:56] duration metric: took 12.91325ms WaitForService to wait for kubelet
	I0919 23:23:55.702098  648050 kubeadm.go:578] duration metric: took 2.02454715s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:23:55.702119  648050 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:23:55.704975  648050 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:23:55.705003  648050 node_conditions.go:123] node cpu capacity is 8
	I0919 23:23:55.705022  648050 node_conditions.go:105] duration metric: took 2.897264ms to run NodePressure ...
	I0919 23:23:55.705038  648050 start.go:241] waiting for startup goroutines ...
	I0919 23:23:55.705049  648050 start.go:246] waiting for cluster config update ...
	I0919 23:23:55.705067  648050 start.go:255] writing updated cluster config ...
	I0919 23:23:55.705381  648050 ssh_runner.go:195] Run: rm -f paused
	I0919 23:23:55.709423  648050 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:23:55.713400  648050 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tv82" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:23:57.719147  648050 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	I0919 23:23:57.592490  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:58.092541  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:58.592598  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:59.092271  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:59.592186  651681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:59.661627  651681 kubeadm.go:1105] duration metric: took 4.655789695s to wait for elevateKubeSystemPrivileges
	I0919 23:23:59.661678  651681 kubeadm.go:394] duration metric: took 16.278777616s to StartCluster
	I0919 23:23:59.661704  651681 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:59.661796  651681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:23:59.664567  651681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:23:59.664871  651681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:23:59.664885  651681 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:23:59.664941  651681 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:23:59.665050  651681 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-485703"
	I0919 23:23:59.665083  651681 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-485703"
	I0919 23:23:59.665103  651681 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:23:59.665116  651681 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:23:59.665117  651681 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-485703"
	I0919 23:23:59.665183  651681 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-485703"
	I0919 23:23:59.665666  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:59.665711  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:59.667587  651681 out.go:179] * Verifying Kubernetes components...
	I0919 23:23:59.668963  651681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:23:59.690012  651681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:23:59.690318  651681 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-485703"
	I0919 23:23:59.690366  651681 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:23:59.690878  651681 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:23:59.691335  651681 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:23:59.691355  651681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:23:59.691405  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:59.716076  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:59.725170  651681 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:23:59.725212  651681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:23:59.725283  651681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:23:59.747278  651681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:23:59.760404  651681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:23:59.812003  651681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:23:59.838792  651681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:23:59.864997  651681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:23:59.966681  651681 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0919 23:23:59.968983  651681 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-485703" to be "Ready" ...
	I0919 23:23:59.978282  651681 node_ready.go:49] node "default-k8s-diff-port-485703" is "Ready"
	I0919 23:23:59.978317  651681 node_ready.go:38] duration metric: took 9.287842ms for node "default-k8s-diff-port-485703" to be "Ready" ...
	I0919 23:23:59.978339  651681 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:23:59.978392  651681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:24:00.161965  651681 api_server.go:72] duration metric: took 497.039981ms to wait for apiserver process to appear ...
	I0919 23:24:00.161994  651681 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:24:00.162017  651681 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0919 23:24:00.167907  651681 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0919 23:24:00.168891  651681 api_server.go:141] control plane version: v1.34.0
	I0919 23:24:00.168918  651681 api_server.go:131] duration metric: took 6.916938ms to wait for apiserver health ...
	I0919 23:24:00.168928  651681 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:24:00.169064  651681 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:24:00.171115  651681 addons.go:514] duration metric: took 506.171761ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:24:00.173982  651681 system_pods.go:59] 6 kube-system pods found
	I0919 23:24:00.174011  651681 system_pods.go:61] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:00.174018  651681 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:00.174030  651681 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:00.174037  651681 system_pods.go:61] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:00.174045  651681 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:00.174050  651681 system_pods.go:61] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:24:00.174057  651681 system_pods.go:74] duration metric: took 5.122366ms to wait for pod list to return data ...
	I0919 23:24:00.174067  651681 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:24:00.176105  651681 default_sa.go:45] found service account: "default"
	I0919 23:24:00.176125  651681 default_sa.go:55] duration metric: took 2.051752ms for default service account to be created ...
	I0919 23:24:00.176136  651681 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:24:00.178232  651681 system_pods.go:86] 6 kube-system pods found
	I0919 23:24:00.178259  651681 system_pods.go:89] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:00.178266  651681 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:00.178274  651681 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:00.178280  651681 system_pods.go:89] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:00.178288  651681 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:00.178294  651681 system_pods.go:89] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:24:00.178327  651681 retry.go:31] will retry after 256.574125ms: missing components: kube-dns, kube-proxy
	I0919 23:24:00.438922  651681 system_pods.go:86] 8 kube-system pods found
	I0919 23:24:00.438959  651681 system_pods.go:89] "coredns-66bc5c9577-5jn8b" [e9c319ac-7e8a-4a6d-bdb7-3022c8d7d7d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:00.438995  651681 system_pods.go:89] "coredns-66bc5c9577-p9g2c" [39f6d70b-667f-416e-a9f3-2c3807cd0df5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:00.439004  651681 system_pods.go:89] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running
	I0919 23:24:00.439039  651681 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:00.439047  651681 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:00.439155  651681 system_pods.go:89] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:00.439164  651681 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:00.439170  651681 system_pods.go:89] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:24:00.439188  651681 retry.go:31] will retry after 307.467209ms: missing components: kube-dns, kube-proxy
	I0919 23:24:00.471367  651681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-485703" context rescaled to 1 replicas
	I0919 23:24:00.751732  651681 system_pods.go:86] 8 kube-system pods found
	I0919 23:24:00.751773  651681 system_pods.go:89] "coredns-66bc5c9577-5jn8b" [e9c319ac-7e8a-4a6d-bdb7-3022c8d7d7d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:00.751785  651681 system_pods.go:89] "coredns-66bc5c9577-p9g2c" [39f6d70b-667f-416e-a9f3-2c3807cd0df5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:00.751793  651681 system_pods.go:89] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running
	I0919 23:24:00.751803  651681 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:00.751816  651681 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:00.751827  651681 system_pods.go:89] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:00.751836  651681 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:00.751845  651681 system_pods.go:89] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:24:00.751870  651681 retry.go:31] will retry after 390.646332ms: missing components: kube-dns, kube-proxy
	I0919 23:24:01.147659  651681 system_pods.go:86] 8 kube-system pods found
	I0919 23:24:01.147695  651681 system_pods.go:89] "coredns-66bc5c9577-5jn8b" [e9c319ac-7e8a-4a6d-bdb7-3022c8d7d7d1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:01.147702  651681 system_pods.go:89] "coredns-66bc5c9577-p9g2c" [39f6d70b-667f-416e-a9f3-2c3807cd0df5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:01.147707  651681 system_pods.go:89] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running
	I0919 23:24:01.147718  651681 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:01.147724  651681 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:01.147730  651681 system_pods.go:89] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:01.147735  651681 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:01.147740  651681 system_pods.go:89] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:24:01.147761  651681 retry.go:31] will retry after 413.054389ms: missing components: kube-dns, kube-proxy
	I0919 23:24:01.564887  651681 system_pods.go:86] 7 kube-system pods found
	I0919 23:24:01.564919  651681 system_pods.go:89] "coredns-66bc5c9577-5jn8b" [e9c319ac-7e8a-4a6d-bdb7-3022c8d7d7d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:01.564925  651681 system_pods.go:89] "etcd-default-k8s-diff-port-485703" [02beb40a-b479-4d90-8571-9cd121164a34] Running
	I0919 23:24:01.564931  651681 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-485703" [a0ceefb9-a01d-4f72-8be0-cfe75cdcb8fd] Running
	I0919 23:24:01.564937  651681 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-485703" [f771fdea-14c9-43f1-9264-a3ca8437a5cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:01.564942  651681 system_pods.go:89] "kube-proxy-422z6" [dadca44c-bde6-4995-a100-1f2444f831bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:24:01.564949  651681 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-485703" [e68dde3d-d0ea-42a6-afc1-ac85698c9fb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:01.564953  651681 system_pods.go:89] "storage-provisioner" [cbbe2f2f-0085-4143-9329-0afce410bfb6] Running
	I0919 23:24:01.564966  651681 system_pods.go:126] duration metric: took 1.388820296s to wait for k8s-apps to be running ...
	I0919 23:24:01.564980  651681 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:24:01.565036  651681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:24:01.581335  651681 system_svc.go:56] duration metric: took 16.343416ms WaitForService to wait for kubelet
	I0919 23:24:01.581371  651681 kubeadm.go:578] duration metric: took 1.916450519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:24:01.581397  651681 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:24:01.584975  651681 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:24:01.585015  651681 node_conditions.go:123] node cpu capacity is 8
	I0919 23:24:01.585034  651681 node_conditions.go:105] duration metric: took 3.629827ms to run NodePressure ...
	I0919 23:24:01.585048  651681 start.go:241] waiting for startup goroutines ...
	I0919 23:24:01.585057  651681 start.go:246] waiting for cluster config update ...
	I0919 23:24:01.585074  651681 start.go:255] writing updated cluster config ...
	I0919 23:24:01.585453  651681 ssh_runner.go:195] Run: rm -f paused
	I0919 23:24:01.589841  651681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:01.594122  651681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5jn8b" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:23:59.445059  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:01.449514  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:00.245824  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:24:02.743032  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:23:59.722781  648050 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:24:02.219746  648050 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:24:03.598655  651681 pod_ready.go:104] pod "coredns-66bc5c9577-5jn8b" is not "Ready", error: <nil>
	W0919 23:24:05.599339  651681 pod_ready.go:104] pod "coredns-66bc5c9577-5jn8b" is not "Ready", error: <nil>
	W0919 23:24:03.944093  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:05.944580  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:24:08.444129  614852 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:24:08.930587  614852 pod_ready.go:86] duration metric: took 3m46.492017742s for pod "coredns-5dd5756b68-q75nl" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:24:08.930640  614852 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0919 23:24:08.930657  614852 pod_ready.go:40] duration metric: took 4m0.000400198s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:08.932547  614852 out.go:203] 
	W0919 23:24:08.933705  614852 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0919 23:24:08.934737  614852 out.go:203] 
	W0919 23:24:05.242815  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:24:07.242885  632630 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Sep 19 23:19:49 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:19:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/efda4b3258a503f43a6c1cb0b2107048ca758dd2c24bacf94c12a27071c265fc/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:19:49 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:19:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a1d0a4a5e8ac2c0aac0ca67a7724aabb9e352d6fa14a365f56c6223ad30ae96/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:19:49 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:19:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bdd0c30144387a690b6fe45b9436e6731783b85938e42fc56e12075f27c5266/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 23:19:49 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:19:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/59520b69eca50538e65dab4970b914fce0a760bb3a5067b586bae8ea6bc4dc67/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:20:07 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:20:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cac3bad84092bf135d669c5b8bea78e60a2844e262a7d62824e432d0e6953676/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:20:07 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:20:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/161796527f3afd533a2b4bc534a0cd043e92f1b8775e7f508dc61fe7b5ceed38/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:20:07 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:20:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0223c82d1c5673c4941950f364aea7cea94b1f77f4770aa24a5e6e2bc70ca0e5/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Sep 19 23:20:08 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:08.067204136Z" level=info msg="ignoring event" container=fdd935274939ff68c5a2d549d2e66c48be3c7e8588b8ac2b0c64c2a256c15de3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:08 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:08.514999974Z" level=info msg="ignoring event" container=2c4c79485ca80bee8cbb8138640f4d4d52b44bb92d2243562e4d66ed37c52d8e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:09 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:20:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87a2687e1a855e5c09295832a97d39e0f8f2f71d6ef5433b7393673d2f5c33e8/resolv.conf as [nameserver 192.168.103.1 search local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Sep 19 23:20:14 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:20:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 23:20:22 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:22.042231011Z" level=info msg="ignoring event" container=f4c216527cb1246834215127988cc7df437cecad728af5863b83295c968deb10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:22 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:22.213931890Z" level=info msg="ignoring event" container=cac3bad84092bf135d669c5b8bea78e60a2844e262a7d62824e432d0e6953676 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:24 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:24.328147303Z" level=info msg="ignoring event" container=7bf2be49f3c31a0009bf45cd60df6c9ee82d93b2067210e01e1fc94c3d17b1f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:39 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:39.186372762Z" level=info msg="ignoring event" container=807ea436375cef8d06cb8cfc9701ac89c64132e92006792e31470c986d78d8b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:20:53 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:20:53.334152072Z" level=info msg="ignoring event" container=464204e5f13c7401a478fd00af6654693df7e3d49a4bb872de4dfad95f737b68 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:21:09 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:21:09.635359481Z" level=info msg="ignoring event" container=2e38973d4febabf13dabaf5d7c6bc78f230618efdd5234d0d7dece1edf311de1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:21:14 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:21:14Z" level=error msg="error getting RW layer size for container ID '807ea436375cef8d06cb8cfc9701ac89c64132e92006792e31470c986d78d8b6': Error response from daemon: No such container: 807ea436375cef8d06cb8cfc9701ac89c64132e92006792e31470c986d78d8b6"
	Sep 19 23:21:14 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:21:14Z" level=error msg="Set backoffDuration to : 1m0s for container ID '807ea436375cef8d06cb8cfc9701ac89c64132e92006792e31470c986d78d8b6'"
	Sep 19 23:21:45 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:21:45.322234778Z" level=info msg="ignoring event" container=113fbda57d5cdb23ca18cd9d918d8b839ebf3f026ebea1b38517492a92c98276 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:21:52 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:21:52.611179043Z" level=info msg="ignoring event" container=36ba4543e8e1c257739ab783ba2debaec45a6ad2a6a2ec4b9734e14722c0f424 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:21:54 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:21:54Z" level=error msg="error getting RW layer size for container ID '464204e5f13c7401a478fd00af6654693df7e3d49a4bb872de4dfad95f737b68': Error response from daemon: No such container: 464204e5f13c7401a478fd00af6654693df7e3d49a4bb872de4dfad95f737b68"
	Sep 19 23:21:54 old-k8s-version-359569 cri-dockerd[1427]: time="2025-09-19T23:21:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '464204e5f13c7401a478fd00af6654693df7e3d49a4bb872de4dfad95f737b68'"
	Sep 19 23:22:51 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:22:51.266080245Z" level=info msg="ignoring event" container=d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:23:07 old-k8s-version-359569 dockerd[1122]: time="2025-09-19T23:23:07.305851378Z" level=info msg="ignoring event" container=df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fb5b9337f2332       6e38f40d628db       29 seconds ago       Running             storage-provisioner       4                   87a2687e1a855       storage-provisioner
	df3e2740a01cd       ea1030da44aa1       About a minute ago   Exited              kube-proxy                5                   0223c82d1c567       kube-proxy-hvp2z
	d27a6856facf6       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   87a2687e1a855       storage-provisioner
	07d67388cf6cf       ead0a4a53df89       4 minutes ago        Running             coredns                   0                   161796527f3af       coredns-5dd5756b68-q75nl
	16a6dcf2464a7       bb5e0dde9054c       4 minutes ago        Running             kube-apiserver            0                   59520b69eca50       kube-apiserver-old-k8s-version-359569
	d2da53d03680f       4be79c38a4bab       4 minutes ago        Running             kube-controller-manager   0                   5bdd0c3014438       kube-controller-manager-old-k8s-version-359569
	a6ca7dd11600f       73deb9a3f7025       4 minutes ago        Running             etcd                      0                   0a1d0a4a5e8ac       etcd-old-k8s-version-359569
	dc91f93ea3d06       f6f496300a2ae       4 minutes ago        Running             kube-scheduler            0                   efda4b3258a50       kube-scheduler-old-k8s-version-359569
	
	
	==> coredns [07d67388cf6c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-359569
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-359569
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=old-k8s-version-359569
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_19_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:19:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-359569
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:24:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:20:14 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:20:14 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:20:14 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:20:14 +0000   Fri, 19 Sep 2025 23:19:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-359569
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 5529e7c3f88c41fa8b9231e75cbabe58
	  System UUID:                5a3ce1f6-0d12-4d86-96a8-fc8a854ce373
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-q75nl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m2s
	  kube-system                 etcd-old-k8s-version-359569                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m15s
	  kube-system                 kube-apiserver-old-k8s-version-359569             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-old-k8s-version-359569    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-hvp2z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-old-k8s-version-359569             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m15s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m2s                   node-controller  Node old-k8s-version-359569 event: Registered Node old-k8s-version-359569 in Controller
	
	
	==> dmesg <==
	[  +0.799303] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 7a 35 be de 38 08 06
	[Sep19 23:24] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.413300] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.153819] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.000034] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +1.501779] net_ratelimit: 3 callbacks suppressed
	[  +0.000005] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.498271] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.434983] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.413211] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.153058] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.002257] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.933367] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.064934] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.142279] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.205259] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +2.589254] net_ratelimit: 8 callbacks suppressed
	[  +0.000004] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev bridge
	[  +0.063694] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.142379] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.207951] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.154099] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.138663] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	[  +0.499177] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev bridge
	
	
	==> etcd [a6ca7dd11600] <==
	{"level":"info","ts":"2025-09-19T23:19:49.392151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-09-19T23:19:49.3923Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-09-19T23:19:49.3938Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T23:19:49.393999Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:19:49.394027Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:19:49.394056Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-19T23:19:49.394104Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-19T23:19:49.684814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-09-19T23:19:49.684882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-09-19T23:19:49.684903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-09-19T23:19:49.684921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.684943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.686009Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686978Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687283Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687402Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686679Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-359569 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:19:49.688392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:19:49.689083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 23:24:10 up  2:06,  0 users,  load average: 1.89, 2.76, 3.41
	Linux old-k8s-version-359569 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [16a6dcf2464a] <==
	I0919 23:19:52.435275       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 23:19:52.471426       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 23:19:52.478044       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0919 23:19:52.479191       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 23:19:52.482893       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:19:53.025711       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 23:19:54.051998       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 23:19:54.064062       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 23:19:54.075465       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 23:20:07.136137       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0919 23:20:07.287102       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0919 23:20:50.960806       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:50.960876       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:50.965280       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:50.965281       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:50.965309       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.965381       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.965386       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.965484       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.966032       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.966032       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.966040       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.966218       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.966242       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.966265       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	
	
	==> kube-controller-manager [d2da53d03680] <==
	I0919 23:20:22.262027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.713µs"
	I0919 23:20:22.397025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.81µs"
	I0919 23:20:22.407881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.349µs"
	I0919 23:20:22.411443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.915µs"
	E0919 23:20:50.404846       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:50.404849       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.673061       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.673794       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.675299       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.676048       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405139       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405140       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.674370       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.675802       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.676323       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406111       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406144       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.674482       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676638       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676640       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407306       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407318       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.674821       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	
	
	==> kube-proxy [df3e2740a01c] <==
	E0919 23:23:07.286524       1 run.go:74] "command failed" err="failed complete: too many open files"
	
	
	==> kube-scheduler [dc91f93ea3d0] <==
	E0919 23:19:51.050860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 23:19:51.050936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.050961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.051051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.876106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 23:19:51.876148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 23:19:51.951405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.951449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.979303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.979350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 23:19:51.979365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.979382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 23:19:51.990556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.990608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.017451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 23:19:52.017493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 23:19:52.032120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 23:19:52.032165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 23:19:52.238694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:52.238739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.240416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 23:19:52.240457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0919 23:19:52.544907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 23:22:42 old-k8s-version-359569 kubelet[2420]: E0919 23:22:42.136734    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:22:51 old-k8s-version-359569 kubelet[2420]: I0919 23:22:51.311353    2420 scope.go:117] "RemoveContainer" containerID="36ba4543e8e1c257739ab783ba2debaec45a6ad2a6a2ec4b9734e14722c0f424"
	Sep 19 23:22:51 old-k8s-version-359569 kubelet[2420]: I0919 23:22:51.311780    2420 scope.go:117] "RemoveContainer" containerID="d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96"
	Sep 19 23:22:51 old-k8s-version-359569 kubelet[2420]: E0919 23:22:51.312056    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ef0a9cd7-6497-4877-8fc6-286067f0db01)\"" pod="kube-system/storage-provisioner" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01"
	Sep 19 23:22:55 old-k8s-version-359569 kubelet[2420]: I0919 23:22:55.135423    2420 scope.go:117] "RemoveContainer" containerID="113fbda57d5cdb23ca18cd9d918d8b839ebf3f026ebea1b38517492a92c98276"
	Sep 19 23:22:55 old-k8s-version-359569 kubelet[2420]: E0919 23:22:55.135680    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:23:02 old-k8s-version-359569 kubelet[2420]: I0919 23:23:02.136067    2420 scope.go:117] "RemoveContainer" containerID="d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96"
	Sep 19 23:23:02 old-k8s-version-359569 kubelet[2420]: E0919 23:23:02.136278    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ef0a9cd7-6497-4877-8fc6-286067f0db01)\"" pod="kube-system/storage-provisioner" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01"
	Sep 19 23:23:07 old-k8s-version-359569 kubelet[2420]: I0919 23:23:07.136263    2420 scope.go:117] "RemoveContainer" containerID="113fbda57d5cdb23ca18cd9d918d8b839ebf3f026ebea1b38517492a92c98276"
	Sep 19 23:23:07 old-k8s-version-359569 kubelet[2420]: I0919 23:23:07.403272    2420 scope.go:117] "RemoveContainer" containerID="113fbda57d5cdb23ca18cd9d918d8b839ebf3f026ebea1b38517492a92c98276"
	Sep 19 23:23:07 old-k8s-version-359569 kubelet[2420]: I0919 23:23:07.403738    2420 scope.go:117] "RemoveContainer" containerID="df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2"
	Sep 19 23:23:07 old-k8s-version-359569 kubelet[2420]: E0919 23:23:07.404154    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:23:14 old-k8s-version-359569 kubelet[2420]: I0919 23:23:14.136254    2420 scope.go:117] "RemoveContainer" containerID="d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96"
	Sep 19 23:23:14 old-k8s-version-359569 kubelet[2420]: E0919 23:23:14.136574    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ef0a9cd7-6497-4877-8fc6-286067f0db01)\"" pod="kube-system/storage-provisioner" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01"
	Sep 19 23:23:19 old-k8s-version-359569 kubelet[2420]: I0919 23:23:19.135129    2420 scope.go:117] "RemoveContainer" containerID="df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2"
	Sep 19 23:23:19 old-k8s-version-359569 kubelet[2420]: E0919 23:23:19.135527    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:23:28 old-k8s-version-359569 kubelet[2420]: I0919 23:23:28.136212    2420 scope.go:117] "RemoveContainer" containerID="d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96"
	Sep 19 23:23:28 old-k8s-version-359569 kubelet[2420]: E0919 23:23:28.136443    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ef0a9cd7-6497-4877-8fc6-286067f0db01)\"" pod="kube-system/storage-provisioner" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01"
	Sep 19 23:23:33 old-k8s-version-359569 kubelet[2420]: I0919 23:23:33.135726    2420 scope.go:117] "RemoveContainer" containerID="df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2"
	Sep 19 23:23:33 old-k8s-version-359569 kubelet[2420]: E0919 23:23:33.136241    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:23:40 old-k8s-version-359569 kubelet[2420]: I0919 23:23:40.135980    2420 scope.go:117] "RemoveContainer" containerID="d27a6856facf69d433013be23c1f3aeeab2a0989de874bcf58fb8183f1951a96"
	Sep 19 23:23:47 old-k8s-version-359569 kubelet[2420]: I0919 23:23:47.135742    2420 scope.go:117] "RemoveContainer" containerID="df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2"
	Sep 19 23:23:47 old-k8s-version-359569 kubelet[2420]: E0919 23:23:47.136097    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	Sep 19 23:24:02 old-k8s-version-359569 kubelet[2420]: I0919 23:24:02.135633    2420 scope.go:117] "RemoveContainer" containerID="df3e2740a01cd45c55ef2faeaedaeee5a2057134660954ba910b8dbd68535ce2"
	Sep 19 23:24:02 old-k8s-version-359569 kubelet[2420]: E0919 23:24:02.135897    2420 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-proxy pod=kube-proxy-hvp2z_kube-system(8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be)\"" pod="kube-system/kube-proxy-hvp2z" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be"
	
	
	==> storage-provisioner [d27a6856facf] <==
	I0919 23:22:21.247918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:22:51.250086       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fb5b9337f233] <==
	I0919 23:23:40.264539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-359569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (277.28s)
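Note on the recurring "too many open files" errors above: the control-plane components (dynamic_cafile_content, dynamic_serving_content) and kube-proxy all fail while creating fsnotify watchers, which on Linux is usually the per-user inotify instance limit (fs.inotify.max_user_instances) being exhausted on the shared CI host rather than the ordinary file-descriptor ulimit. The following is a minimal standalone sketch, not part of minikube or this test suite, that reproduces the same failure mode by exhausting inotify instances with the syscall fsnotify uses; the probe loop and limit path are illustrative assumptions.

// inotify_limit_probe.go - standalone sketch (Linux only, not minikube code):
// shows how "too many open files" surfaces from inotify_init1 once
// fs.inotify.max_user_instances is exhausted, matching the fsnotify watcher
// errors in the post-mortem logs above.
package main

import (
	"fmt"
	"os"
	"strings"
	"syscall"
)

func main() {
	// Report the current per-user limit on inotify instances.
	if raw, err := os.ReadFile("/proc/sys/fs/inotify/max_user_instances"); err == nil {
		fmt.Printf("fs.inotify.max_user_instances = %s\n", strings.TrimSpace(string(raw)))
	}

	// Keep creating inotify instances until the kernel refuses; the returned
	// error is EMFILE, which prints as "too many open files".
	var fds []int
	defer func() {
		for _, fd := range fds {
			syscall.Close(fd)
		}
	}()
	for {
		fd, err := syscall.InotifyInit1(0)
		if err != nil {
			fmt.Printf("inotify_init1 failed after %d instances: %v\n", len(fds), err)
			return
		}
		fds = append(fds, fd)
	}
}

Raising the sysctl (or reducing the number of concurrent watcher-heavy clusters on the agent) is the usual remediation when this probe fails well below the expected limit.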

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-359569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569: exit status 2 (300.194642ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-359569 -n old-k8s-version-359569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-359569 -n old-k8s-version-359569: exit status 2 (313.126173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-359569 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569: exit status 2 (413.407576ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-359569 -n old-k8s-version-359569
E0919 23:26:10.084386  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-359569 -n old-k8s-version-359569: exit status 2 (392.79738ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
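The Pause test fails here because the kubelet field is still "Stopped" immediately after unpause. A rough sketch of the kind of status polling involved is below; the binary path, profile name, and status flags are taken from the log lines above, but the retry loop and function names are assumptions for illustration only, not the actual test helpers.

// status_poll.go - illustrative sketch only: poll the kubelet status field
// after an unpause the way the commands in the log above do, tolerating a
// non-zero exit (e.g. status 2) while the component state is still settling.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func kubeletStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Kubelet}}", "-p", profile, "-n", profile).CombinedOutput()
	// minikube exits non-zero when a component is not Running; the printed
	// state is still useful, so return both.
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "old-k8s-version-359569"
	deadline := time.Now().Add(30 * time.Second)
	for {
		state, err := kubeletStatus(profile)
		fmt.Printf("kubelet=%q err=%v\n", state, err)
		if state == "Running" || time.Now().After(deadline) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}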
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-359569
helpers_test.go:243: (dbg) docker inspect old-k8s-version-359569:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	        "Created": "2025-09-19T23:19:37.347852462Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 661119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:24:33.056461271Z",
	            "FinishedAt": "2025-09-19T23:24:32.248759953Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hosts",
	        "LogPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05-json.log",
	        "Name": "/old-k8s-version-359569",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-359569:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-359569",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	                "LowerDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-359569",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-359569/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-359569",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e88a76e3335fbecc57e02a3ae7db909ac48d0ae49aae9e7c2d5f0fa5cd07467",
	            "SandboxKey": "/var/run/docker/netns/3e88a76e3335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-359569": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:dd:97:d1:85:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1de8892b98e15a33d7c5eadc8f8aa4724fe6ba0a68c7bcaff3b9263e169c715",
	                    "EndpointID": "6dae4efcd74c9ce200d72a471c4968e75305f016140ccacfe8d3059354c0e548",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-359569",
	                        "1ae574ad604d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
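For reference, the SSH host port used later in this report (33147 under "22/tcp" in the inspect output above) can be read back programmatically. The sketch below is a standalone illustration, not minikube source; it parses the same `docker inspect` JSON, which is equivalent to the Go template the driver logs further down.

// port_lookup.go - standalone sketch: read the "22/tcp" host port out of
// `docker inspect` JSON like the output above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-359569").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		return
	}
	if bindings := containers[0].NetworkSettings.Ports["22/tcp"]; len(bindings) > 0 {
		fmt.Printf("ssh reachable at %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}
}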
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359569 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-359569 logs -n 25: (1.851236546s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p kubenet-361266 sudo crio config                                                                                                                                                                                                              │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p kubenet-361266                                                                                                                                                                                                                               │ kubenet-361266               │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p disable-driver-mounts-481061                                                                                                                                                                                                                 │ disable-driver-mounts-481061 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-359569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p old-k8s-version-359569 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-359569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-834234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p no-preload-834234 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-834234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-253767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ stop    │ -p embed-certs-253767 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-485703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ stop    │ -p default-k8s-diff-port-485703 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-253767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-485703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │                     │
	│ image   │ old-k8s-version-359569 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ pause   │ -p old-k8s-version-359569 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ unpause │ -p old-k8s-version-359569 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ image   │ no-preload-834234 image list --format=json                                                                                                                                                                                                      │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ pause   │ -p no-preload-834234 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:25:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:25:29.944138  674837 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:25:29.944250  674837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:25:29.944258  674837 out.go:374] Setting ErrFile to fd 2...
	I0919 23:25:29.944264  674837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:25:29.944514  674837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:25:29.944978  674837 out.go:368] Setting JSON to false
	I0919 23:25:29.946216  674837 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7666,"bootTime":1758316664,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:25:29.946322  674837 start.go:140] virtualization: kvm guest
	I0919 23:25:29.948459  674837 out.go:179] * [default-k8s-diff-port-485703] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:25:29.949978  674837 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:25:29.950016  674837 notify.go:220] Checking for updates...
	I0919 23:25:29.952179  674837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:25:29.953222  674837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:29.954203  674837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:25:29.955131  674837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:25:29.956129  674837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:25:29.957807  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:29.958552  674837 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:25:29.984101  674837 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:25:29.984220  674837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:25:30.039323  674837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:25:30.027980594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:25:30.039477  674837 docker.go:318] overlay module found
	I0919 23:25:30.041158  674837 out.go:179] * Using the docker driver based on existing profile
	I0919 23:25:30.042269  674837 start.go:304] selected driver: docker
	I0919 23:25:30.042286  674837 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:30.042399  674837 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:25:30.043110  674837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:25:30.102993  674837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:25:30.092119612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:25:30.103297  674837 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:25:30.103327  674837 cni.go:84] Creating CNI manager for ""
	I0919 23:25:30.103387  674837 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:30.103426  674837 start.go:348] cluster config:
	{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:30.105302  674837 out.go:179] * Starting "default-k8s-diff-port-485703" primary control-plane node in "default-k8s-diff-port-485703" cluster
	I0919 23:25:30.106292  674837 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:25:30.107241  674837 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:25:30.108113  674837 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:30.108141  674837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:25:30.108159  674837 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 23:25:30.108182  674837 cache.go:58] Caching tarball of preloaded images
	I0919 23:25:30.108327  674837 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:25:30.108350  674837 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 23:25:30.108539  674837 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:25:30.127982  674837 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:25:30.128001  674837 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:25:30.128018  674837 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:25:30.128046  674837 start.go:360] acquireMachinesLock for default-k8s-diff-port-485703: {Name:mk6951b47a07a3f8003f766143829366ba3d9245 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:25:30.128110  674837 start.go:364] duration metric: took 40.216µs to acquireMachinesLock for "default-k8s-diff-port-485703"
	I0919 23:25:30.128133  674837 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:25:30.128142  674837 fix.go:54] fixHost starting: 
	I0919 23:25:30.128356  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:30.147490  674837 fix.go:112] recreateIfNeeded on default-k8s-diff-port-485703: state=Stopped err=<nil>
	W0919 23:25:30.147539  674837 fix.go:138] unexpected machine state, will restart: <nil>
	W0919 23:25:26.223906  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:28.721831  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:26.205917  673615 out.go:252] * Restarting existing docker container for "embed-certs-253767" ...
	I0919 23:25:26.205998  673615 cli_runner.go:164] Run: docker start embed-certs-253767
	I0919 23:25:26.479850  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:26.501321  673615 kic.go:430] container "embed-certs-253767" state is running.
	I0919 23:25:26.501793  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:26.523190  673615 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/config.json ...
	I0919 23:25:26.523458  673615 machine.go:93] provisionDockerMachine start ...
	I0919 23:25:26.523555  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:26.544548  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:26.544902  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:26.544920  673615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:25:26.545682  673615 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:33158: read: connection reset by peer
	I0919 23:25:29.684602  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:25:29.684646  673615 ubuntu.go:182] provisioning hostname "embed-certs-253767"
	I0919 23:25:29.684721  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:29.703720  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:29.703921  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:29.703934  673615 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253767 && echo "embed-certs-253767" | sudo tee /etc/hostname
	I0919 23:25:29.871799  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:25:29.871865  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:29.890816  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:29.891092  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:29.891122  673615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253767/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:25:30.033720  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:25:30.033769  673615 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:25:30.033797  673615 ubuntu.go:190] setting up certificates
	I0919 23:25:30.033811  673615 provision.go:84] configureAuth start
	I0919 23:25:30.033872  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:30.052684  673615 provision.go:143] copyHostCerts
	I0919 23:25:30.052755  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:25:30.052778  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:25:30.052863  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:25:30.053044  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:25:30.053057  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:25:30.053097  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:25:30.053198  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:25:30.053209  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:25:30.053244  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:25:30.053332  673615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253767 san=[127.0.0.1 192.168.94.2 embed-certs-253767 localhost minikube]
	I0919 23:25:30.234528  673615 provision.go:177] copyRemoteCerts
	I0919 23:25:30.234605  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:25:30.234674  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.257631  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:30.361350  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:25:30.389697  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:25:30.419544  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:25:30.448178  673615 provision.go:87] duration metric: took 414.351604ms to configureAuth
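	[editor's note] The server certificate generated during configureAuth above carries the SANs listed in the provision step (127.0.0.1, 192.168.94.2, embed-certs-253767, localhost, minikube). An illustrative way to confirm them from the host -- the path below is an assumption, not taken from this run:

	    openssl x509 -noout -text \
	      -in ~/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'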
	I0919 23:25:30.448208  673615 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:25:30.448371  673615 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:30.448415  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.465572  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.465792  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.465803  673615 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:25:30.605066  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:25:30.605084  673615 ubuntu.go:71] root file system type: overlay
	I0919 23:25:30.605209  673615 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:25:30.605265  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.631307  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.631653  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.631765  673615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:25:30.798756  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:25:30.798841  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.818912  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.819493  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.819547  673615 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
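	[editor's note] The diff-then-swap command above only replaces and restarts docker.service when the rendered unit differs from what is installed. Illustrative follow-up checks (not executed in this run) to confirm the swap took effect:

	    sudo systemctl cat docker                      # shows the unit the daemon is actually running with
	    sudo docker info --format '{{.CgroupDriver}}'  # should report "systemd" once the daemon.json written later in this run lands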
	W0919 23:25:28.194661  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:30.195559  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:32.195926  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:30.962591  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:25:30.962620  673615 machine.go:96] duration metric: took 4.439145746s to provisionDockerMachine
	I0919 23:25:30.962631  673615 start.go:293] postStartSetup for "embed-certs-253767" (driver="docker")
	I0919 23:25:30.962641  673615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:25:30.962702  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:25:30.962739  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.980604  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.077895  673615 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:25:31.081585  673615 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:25:31.081614  673615 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:25:31.081622  673615 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:25:31.081629  673615 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:25:31.081640  673615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:25:31.081704  673615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:25:31.081818  673615 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:25:31.081915  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:25:31.092920  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:31.119832  673615 start.go:296] duration metric: took 157.182424ms for postStartSetup
	I0919 23:25:31.119919  673615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:25:31.119957  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.138223  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.231108  673615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:25:31.235803  673615 fix.go:56] duration metric: took 5.057464858s for fixHost
	I0919 23:25:31.235827  673615 start.go:83] releasing machines lock for "embed-certs-253767", held for 5.057518817s
	I0919 23:25:31.235899  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:31.253706  673615 ssh_runner.go:195] Run: cat /version.json
	I0919 23:25:31.253762  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.253779  673615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:25:31.253846  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.273065  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.273279  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.438358  673615 ssh_runner.go:195] Run: systemctl --version
	I0919 23:25:31.443355  673615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:25:31.448118  673615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:25:31.467887  673615 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
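	[editor's note] The find/sed above gives the loopback CNI config an explicit name and pins its cniVersion to 1.0.0. A hypothetical before/after of such a file (not the actual contents from this node):

	    # before:  { "cniVersion": "0.3.1",
	    #            "type": "loopback" }
	    # after:   { "cniVersion": "1.0.0",
	    #            "name": "loopback",
	    #            "type": "loopback" }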
	I0919 23:25:31.467963  673615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:25:31.477879  673615 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 23:25:31.477911  673615 start.go:495] detecting cgroup driver to use...
	I0919 23:25:31.477948  673615 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:31.478067  673615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:31.495402  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:25:31.505927  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:25:31.516280  673615 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:25:31.516348  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:25:31.526965  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:31.537331  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:25:31.547987  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:31.558586  673615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:25:31.568224  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:25:31.578655  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:25:31.589139  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:25:31.599764  673615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:25:31.608667  673615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
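	[editor's note] Illustrative check (not part of this run) that the forwarding toggle above stuck:

	    cat /proc/sys/net/ipv4/ip_forward   # expect: 1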
	I0919 23:25:31.617805  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:31.687545  673615 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:25:31.770333  673615 start.go:495] detecting cgroup driver to use...
	I0919 23:25:31.770382  673615 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:31.770426  673615 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:25:31.783922  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:31.796341  673615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:25:31.819064  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:31.833576  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:25:31.848452  673615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:31.868832  673615 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:25:31.872957  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:25:31.883296  673615 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:25:31.903423  673615 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:25:31.988302  673615 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:25:32.061857  673615 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:25:32.061989  673615 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
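	[editor's note] The 129-byte daemon.json is not echoed in the log; a typical shape for forcing the systemd cgroup driver (an assumed example, not the exact payload) is:

	    {
	      "exec-opts": ["native.cgroupdriver=systemd"],
	      "log-driver": "json-file"
	    }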
	I0919 23:25:32.082566  673615 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:25:32.095079  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:32.167618  673615 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:25:33.003375  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:25:33.015216  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:25:33.026452  673615 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 23:25:33.038895  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:33.049653  673615 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:25:33.117398  673615 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:25:33.188911  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.264735  673615 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:25:33.286402  673615 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:25:33.297129  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.365641  673615 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:25:33.441273  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:33.454018  673615 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:25:33.454071  673615 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:25:33.457926  673615 start.go:563] Will wait 60s for crictl version
	I0919 23:25:33.457976  673615 ssh_runner.go:195] Run: which crictl
	I0919 23:25:33.461550  673615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:25:33.497887  673615 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:25:33.497957  673615 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:33.525153  673615 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:33.552270  673615 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:25:33.552361  673615 cli_runner.go:164] Run: docker network inspect embed-certs-253767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
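	[editor's note] The Go template above flattens docker network inspect into a single JSON object. A hypothetical result for this node (values assumed for illustration, not taken from the run):

	    {"Name": "embed-certs-253767","Driver": "bridge","Subnet": "192.168.94.0/24","Gateway": "192.168.94.1","MTU": 1500, "ContainerIPs": ["192.168.94.2/24",]}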
	I0919 23:25:33.569486  673615 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:25:33.573408  673615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:33.585675  673615 kubeadm.go:875] updating cluster {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:25:33.585819  673615 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:33.585885  673615 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:33.609143  673615 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:33.609163  673615 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:25:33.609218  673615 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:33.629836  673615 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:33.629860  673615 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:25:33.629873  673615 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0919 23:25:33.629982  673615 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-253767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:25:33.630118  673615 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:25:33.680444  673615 cni.go:84] Creating CNI manager for ""
	I0919 23:25:33.680484  673615 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:33.680510  673615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:25:33.680537  673615 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253767 NodeName:embed-certs-253767 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:25:33.680698  673615 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-253767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:25:33.680771  673615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:25:33.690801  673615 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:25:33.690867  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:25:33.700842  673615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 23:25:33.719299  673615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:25:33.737940  673615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
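	[editor's note] With the rendered kubeadm.yaml.new staged on the node, the config can be sanity-checked offline; an illustrative (not executed here) dry run using the binaries path from this profile:

	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new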
	I0919 23:25:33.756671  673615 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:25:33.760381  673615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:33.773375  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.841712  673615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:33.864983  673615 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767 for IP: 192.168.94.2
	I0919 23:25:33.865005  673615 certs.go:194] generating shared ca certs ...
	I0919 23:25:33.865024  673615 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:33.865198  673615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:25:33.865256  673615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:25:33.865269  673615 certs.go:256] generating profile certs ...
	I0919 23:25:33.865411  673615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.key
	I0919 23:25:33.865483  673615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca
	I0919 23:25:33.865555  673615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key
	I0919 23:25:33.865698  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:25:33.865739  673615 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:25:33.865749  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:25:33.865781  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:25:33.865813  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:25:33.865841  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:25:33.865899  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:33.866723  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:25:33.892712  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:25:33.920169  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:25:33.957470  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:25:33.991717  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0919 23:25:34.022657  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:25:34.047553  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:25:34.071680  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:25:34.104406  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:25:34.137052  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:25:34.166651  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:25:34.197156  673615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:25:34.218328  673615 ssh_runner.go:195] Run: openssl version
	I0919 23:25:34.225260  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:25:34.236384  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.240556  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.240707  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.248711  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:25:34.258472  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:25:34.268343  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.271889  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.271940  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.279086  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:25:34.288830  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:25:34.299196  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.302981  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.303036  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.310231  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
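	[editor's note] The .0 symlinks created above follow OpenSSL's subject-hash lookup convention, which is why each hash is computed first with openssl x509 -hash. A minimal sketch of the same idiom:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # e.g. b5213941.0, as above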
	I0919 23:25:34.319230  673615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:25:34.322686  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:25:34.329163  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:25:34.335396  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:25:34.341948  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:25:34.348461  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:25:34.356117  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
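	[editor's note] Each -checkend 86400 probe above exits non-zero if the certificate expires within the next 24 hours. Illustrative standalone usage against one of the certs checked here:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"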
	I0919 23:25:34.362408  673615 kubeadm.go:392] StartCluster: {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:34.362564  673615 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:25:34.381704  673615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:25:34.391242  673615 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:25:34.391258  673615 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:25:34.391300  673615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:25:34.401755  673615 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:25:34.402708  673615 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-253767" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:34.403198  673615 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-253767" cluster setting kubeconfig missing "embed-certs-253767" context setting]
	I0919 23:25:34.403987  673615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.406026  673615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:25:34.417779  673615 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0919 23:25:34.417814  673615 kubeadm.go:593] duration metric: took 26.549362ms to restartPrimaryControlPlane
	I0919 23:25:34.417826  673615 kubeadm.go:394] duration metric: took 55.428161ms to StartCluster
	I0919 23:25:34.417844  673615 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.417945  673615 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:34.419387  673615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.419640  673615 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:25:34.419725  673615 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:25:34.419833  673615 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253767"
	I0919 23:25:34.419854  673615 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-253767"
	I0919 23:25:34.419852  673615 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	W0919 23:25:34.419863  673615 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:25:34.419894  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.419903  673615 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253767"
	I0919 23:25:34.419921  673615 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253767"
	I0919 23:25:34.419949  673615 addons.go:69] Setting metrics-server=true in profile "embed-certs-253767"
	I0919 23:25:34.419979  673615 addons.go:238] Setting addon metrics-server=true in "embed-certs-253767"
	W0919 23:25:34.419988  673615 addons.go:247] addon metrics-server should already be in state true
	I0919 23:25:34.420062  673615 addons.go:69] Setting dashboard=true in profile "embed-certs-253767"
	I0919 23:25:34.420083  673615 addons.go:238] Setting addon dashboard=true in "embed-certs-253767"
	W0919 23:25:34.420091  673615 addons.go:247] addon dashboard should already be in state true
	I0919 23:25:34.420123  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.420233  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.420391  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.420605  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.420712  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.421471  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.421733  673615 out.go:179] * Verifying Kubernetes components...
	I0919 23:25:34.424787  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:34.457957  673615 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:25:34.458049  673615 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:25:34.460043  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:25:34.460071  673615 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:25:34.460160  673615 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.460181  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:25:34.460238  673615 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:25:34.460331  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.460394  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.462809  673615 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:25:30.149200  674837 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-485703" ...
	I0919 23:25:30.149273  674837 cli_runner.go:164] Run: docker start default-k8s-diff-port-485703
	I0919 23:25:30.387349  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:30.407247  674837 kic.go:430] container "default-k8s-diff-port-485703" state is running.
	I0919 23:25:30.407676  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:30.426594  674837 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:25:30.426854  674837 machine.go:93] provisionDockerMachine start ...
	I0919 23:25:30.426928  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:30.446414  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.446782  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:30.446804  674837 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:25:30.447602  674837 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58516->127.0.0.1:33164: read: connection reset by peer
	I0919 23:25:33.587482  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:25:33.587546  674837 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-485703"
	I0919 23:25:33.587610  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:33.607632  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:33.607911  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:33.607927  674837 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-485703 && echo "default-k8s-diff-port-485703" | sudo tee /etc/hostname
	I0919 23:25:33.755479  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:25:33.755591  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:33.774588  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:33.774800  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:33.774817  674837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-485703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-485703/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-485703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:25:33.912238  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:25:33.912266  674837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:25:33.912286  674837 ubuntu.go:190] setting up certificates
	I0919 23:25:33.912297  674837 provision.go:84] configureAuth start
	I0919 23:25:33.912358  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:33.937853  674837 provision.go:143] copyHostCerts
	I0919 23:25:33.937916  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:25:33.937934  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:25:33.938004  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:25:33.938149  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:25:33.938166  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:25:33.938212  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:25:33.938321  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:25:33.938334  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:25:33.938376  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:25:33.938493  674837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-485703 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-485703 localhost minikube]
	I0919 23:25:34.741723  674837 provision.go:177] copyRemoteCerts
	I0919 23:25:34.741806  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:25:34.741862  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:34.768145  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:34.879088  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:25:34.915427  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	W0919 23:25:30.722158  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:33.222613  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:34.463964  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:25:34.463985  673615 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:25:34.464327  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.467526  673615 addons.go:238] Setting addon default-storageclass=true in "embed-certs-253767"
	W0919 23:25:34.467550  673615 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:25:34.467580  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.470054  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.502904  673615 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:34.502928  673615 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:25:34.502997  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.503508  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.509839  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.513067  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.533679  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.575047  673615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:34.593003  673615 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253767" to be "Ready" ...
	I0919 23:25:34.656058  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:25:34.656090  673615 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:25:34.662778  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.663632  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:25:34.663656  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:25:34.677738  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:34.699318  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:25:34.699355  673615 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:25:34.704895  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:25:34.704922  673615 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:25:34.745554  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:25:34.745607  673615 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:25:34.746344  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:34.746368  673615 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0919 23:25:34.773324  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.773377  673615 retry.go:31] will retry after 147.461987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.780583  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:25:34.780610  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:25:34.781970  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:34.790336  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.790483  673615 retry.go:31] will retry after 355.110169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.814777  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:25:34.814817  673615 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:25:34.841413  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:25:34.841449  673615 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0919 23:25:34.871187  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.871221  673615 retry.go:31] will retry after 154.367143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.874274  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:25:34.874300  673615 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:25:34.901847  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:25:34.901892  673615 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:25:34.920987  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.939454  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:34.939529  673615 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:25:34.975758  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:35.026033  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:35.146294  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:36.784314  673615 node_ready.go:49] node "embed-certs-253767" is "Ready"
	I0919 23:25:36.784347  673615 node_ready.go:38] duration metric: took 2.191280252s for node "embed-certs-253767" to be "Ready" ...
	I0919 23:25:36.784369  673615 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:25:36.784434  673615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:25:37.558995  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.63795515s)
	I0919 23:25:37.559336  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.583525608s)
	I0919 23:25:37.559403  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.533344282s)
	I0919 23:25:37.559744  673615 addons.go:479] Verifying addon metrics-server=true in "embed-certs-253767"
	I0919 23:25:37.559431  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.41311586s)
	I0919 23:25:37.559477  673615 api_server.go:72] duration metric: took 3.139802295s to wait for apiserver process to appear ...
	I0919 23:25:37.560135  673615 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:25:37.560158  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:37.561307  673615 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-253767 addons enable metrics-server
	
	I0919 23:25:37.569215  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:37.569267  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:37.577309  673615 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
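	The 500 responses above come from /healthz while some of the apiserver's post-start hooks are still completing; minikube simply keeps polling until the endpoint returns 200. A small, hypothetical Go sketch of such a healthz wait loop; the URL, interval, timeout and TLS handling are illustrative assumptions, not the real api_server.go logic.

	// healthz_wait_sketch.go — hypothetical polling loop for an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver serves a self-signed cert during bootstrap, so this sketch
		// skips verification; production code would pin the cluster CA instead.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz check passed
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}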
	W0919 23:25:34.196181  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:36.199965  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:34.957009  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:25:34.993954  674837 provision.go:87] duration metric: took 1.081535884s to configureAuth
	I0919 23:25:34.993994  674837 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:25:34.994237  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:34.994309  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.017357  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.017635  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.017653  674837 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:25:35.168288  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:25:35.168314  674837 ubuntu.go:71] root file system type: overlay
	I0919 23:25:35.168467  674837 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:25:35.168608  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.193679  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.193981  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.194082  674837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:25:35.347805  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:25:35.347899  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.368801  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.369117  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.369144  674837 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:25:35.516679  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
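	The SSH command above only swaps in the freshly rendered docker.service and restarts the daemon when the unit actually changed (diff ... || { mv; daemon-reload; restart }). A hypothetical Go sketch of that write-if-changed-then-restart pattern; the paths and the unit body are placeholders, not the real minikube provisioner.

	// unit_update_sketch.go — hypothetical "only restart if the unit differs" helper.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unit unchanged; skip the disruptive restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		// Re-read unit files and restart the service so the new ExecStart takes effect.
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println(err)
		}
	}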
	I0919 23:25:35.516718  674837 machine.go:96] duration metric: took 5.089846476s to provisionDockerMachine
	I0919 23:25:35.516732  674837 start.go:293] postStartSetup for "default-k8s-diff-port-485703" (driver="docker")
	I0919 23:25:35.516746  674837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:25:35.516829  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:25:35.516873  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.536305  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.635604  674837 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:25:35.639133  674837 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:25:35.639160  674837 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:25:35.639168  674837 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:25:35.639174  674837 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:25:35.639184  674837 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:25:35.639227  674837 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:25:35.639305  674837 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:25:35.639411  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:25:35.648457  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:35.672674  674837 start.go:296] duration metric: took 155.926949ms for postStartSetup
	I0919 23:25:35.672754  674837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:25:35.672822  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.697684  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.793585  674837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:25:35.799221  674837 fix.go:56] duration metric: took 5.671063757s for fixHost
	I0919 23:25:35.799275  674837 start.go:83] releasing machines lock for "default-k8s-diff-port-485703", held for 5.671149761s
	I0919 23:25:35.799358  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:35.819941  674837 ssh_runner.go:195] Run: cat /version.json
	I0919 23:25:35.819971  674837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:25:35.820005  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.820067  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.843007  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.843513  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:36.030660  674837 ssh_runner.go:195] Run: systemctl --version
	I0919 23:25:36.037062  674837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:25:36.042649  674837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:25:36.067796  674837 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:25:36.067888  674837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:25:36.079597  674837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 23:25:36.079627  674837 start.go:495] detecting cgroup driver to use...
	I0919 23:25:36.079665  674837 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:36.079778  674837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:36.100659  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:25:36.112782  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:25:36.124598  674837 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:25:36.124656  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:25:36.136320  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:36.147880  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:25:36.159385  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:36.170114  674837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:25:36.181177  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:25:36.194719  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:25:36.207625  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:25:36.219742  674837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:25:36.231890  674837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:25:36.243222  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:36.322441  674837 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:25:36.408490  674837 start.go:495] detecting cgroup driver to use...
	I0919 23:25:36.408595  674837 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:36.408653  674837 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:25:36.421578  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:36.433325  674837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:25:36.457353  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:36.471564  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:25:36.483483  674837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:36.502383  674837 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:25:36.506157  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:25:36.515116  674837 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:25:36.533279  674837 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:25:36.609251  674837 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:25:36.695276  674837 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:25:36.695408  674837 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:25:36.721394  674837 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:25:36.737767  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:36.834649  674837 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:25:37.752406  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:25:37.767052  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:25:37.783206  674837 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 23:25:37.800742  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:37.815579  674837 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:25:37.897776  674837 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:25:37.983340  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.052677  674837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:25:38.080946  674837 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:25:38.093264  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.180601  674837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:25:38.264844  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:38.276756  674837 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:25:38.276811  674837 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:25:38.280626  674837 start.go:563] Will wait 60s for crictl version
	I0919 23:25:38.280672  674837 ssh_runner.go:195] Run: which crictl
	I0919 23:25:38.284150  674837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:25:38.318450  674837 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:25:38.318532  674837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:38.342018  674837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:38.367928  674837 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:25:38.368004  674837 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-485703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:25:38.384188  674837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:25:38.388539  674837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:38.400834  674837 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:25:38.400954  674837 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:38.401002  674837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:38.420701  674837 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:38.420718  674837 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:25:38.420760  674837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:38.440928  674837 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:38.440953  674837 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:25:38.440965  674837 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 docker true true} ...
	I0919 23:25:38.441093  674837 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-485703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:25:38.441158  674837 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:25:38.492344  674837 cni.go:84] Creating CNI manager for ""
	I0919 23:25:38.492389  674837 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:38.492405  674837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:25:38.492437  674837 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-485703 NodeName:default-k8s-diff-port-485703 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:25:38.492599  674837 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-485703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:25:38.492667  674837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:25:38.502838  674837 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:25:38.502912  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:25:38.512555  674837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0919 23:25:38.530876  674837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:25:38.550088  674837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0919 23:25:38.570120  674837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:25:38.573852  674837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
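	The /etc/hosts update above is made idempotent by stripping any existing control-plane.minikube.internal line before appending the current mapping. A hypothetical Go equivalent of that drop-then-append step; file handling is simplified for illustration and is not the command minikube actually runs.

	// hosts_entry_sketch.go — hypothetical idempotent /etc/hosts entry update.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for this hostname.
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}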
	I0919 23:25:38.585399  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.653254  674837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:38.676776  674837 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703 for IP: 192.168.85.2
	I0919 23:25:38.676801  674837 certs.go:194] generating shared ca certs ...
	I0919 23:25:38.676822  674837 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:38.677046  674837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:25:38.677103  674837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:25:38.677118  674837 certs.go:256] generating profile certs ...
	I0919 23:25:38.677231  674837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.key
	I0919 23:25:38.677309  674837 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16
	I0919 23:25:38.677358  674837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key
	I0919 23:25:38.677493  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:25:38.677626  674837 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:25:38.677642  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:25:38.677676  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:25:38.677719  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:25:38.677751  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:25:38.677808  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:38.678394  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:25:38.705947  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:25:38.734669  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:25:38.764863  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:25:38.790776  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:25:38.819841  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:25:38.848339  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:25:38.876786  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:25:38.905308  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:25:38.938353  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:25:38.963603  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:25:38.988188  674837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:25:39.006831  674837 ssh_runner.go:195] Run: openssl version
	I0919 23:25:39.012361  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:25:39.022912  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.026423  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.026475  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.033152  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:25:39.042171  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:25:39.051651  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.055019  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.055065  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.061797  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:25:39.070800  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:25:39.079941  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.083840  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.083886  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.091469  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:25:39.101098  674837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:25:39.104980  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:25:39.113163  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:25:39.120982  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:25:39.128712  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:25:39.136485  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:25:39.144048  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
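Each of the "openssl x509 ... -checkend 86400" runs above asks whether the certificate expires within the next 24 hours; a non-zero exit status would force minikube to regenerate that cert. An equivalent check written in Go (a sketch, using one of the cert paths listed above; not the code minikube itself runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// openssl x509 -checkend 86400 succeeds only if the certificate is
	// still valid 86400 seconds (24h) from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, regeneration needed")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}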
	I0919 23:25:39.152250  674837 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:39.152413  674837 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:25:39.176697  674837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:25:39.189840  674837 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:25:39.189863  674837 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:25:39.189915  674837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:25:39.201780  674837 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:25:39.202680  674837 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-485703" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:39.203231  674837 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-485703" cluster setting kubeconfig missing "default-k8s-diff-port-485703" context setting]
	I0919 23:25:39.204610  674837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
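kubeconfig.go decides the file "needs updating" because neither a cluster nor a context named default-k8s-diff-port-485703 exists in it yet, then takes a WriteFile lock and repairs it. A hedged sketch of that kind of repair using k8s.io/client-go (the server URL and CA path are taken from the surrounding log, but this is not the exact code behind kubeconfig.go):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const (
		path    = "/home/jenkins/minikube-integration/21594-142711/kubeconfig"
		profile = "default-k8s-diff-port-485703"
	)
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Add (or overwrite) the cluster and context entries for this profile.
	cfg.Clusters[profile] = &api.Cluster{
		Server:               "https://192.168.85.2:8444",
		CertificateAuthority: "/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt",
	}
	cfg.Contexts[profile] = &api.Context{
		Cluster:  profile,
		AuthInfo: profile,
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}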
	I0919 23:25:39.206748  674837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:25:39.218084  674837 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0919 23:25:39.218123  674837 kubeadm.go:593] duration metric: took 28.253396ms to restartPrimaryControlPlane
	I0919 23:25:39.218136  674837 kubeadm.go:394] duration metric: took 65.898139ms to StartCluster
	I0919 23:25:39.218159  674837 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:39.218253  674837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:39.220609  674837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:39.220908  674837 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:25:39.221038  674837 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:25:39.221175  674837 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221196  674837 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221211  674837 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221122  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:39.221230  674837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-485703"
	I0919 23:25:39.221238  674837 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221250  674837 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.221258  674837 addons.go:247] addon metrics-server should already be in state true
	I0919 23:25:39.221296  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.221204  674837 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-485703"
	I0919 23:25:39.221231  674837 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.221474  674837 addons.go:247] addon dashboard should already be in state true
	I0919 23:25:39.221551  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.221638  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.221809  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.222010  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	W0919 23:25:39.222223  674837 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:25:39.222293  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.222837  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.224637  674837 out.go:179] * Verifying Kubernetes components...
	I0919 23:25:39.225647  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:39.247508  674837 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.247537  674837 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:25:39.247576  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.248037  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.248052  674837 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:25:39.249030  674837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:25:39.249034  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:25:39.249153  674837 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:25:39.249218  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.252333  674837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:25:39.252395  674837 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.252420  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:25:39.252486  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.254741  674837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:25:39.255724  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:25:39.255746  674837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:25:39.255806  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.279725  674837 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:39.279754  674837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:25:39.279823  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.280152  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.284188  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.285454  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.304801  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.330415  674837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:39.398772  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.398991  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:25:39.399017  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:25:39.406228  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:25:39.406251  674837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:25:39.421926  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:39.423991  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:25:39.424015  674837 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:25:39.430092  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:25:39.430116  674837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:25:39.447109  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.447139  674837 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:25:39.450743  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:25:39.450767  674837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:25:39.451228  674837 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-485703" to be "Ready" ...
	I0919 23:25:39.472998  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.474109  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:25:39.474133  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0919 23:25:39.475405  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.475446  674837 retry.go:31] will retry after 347.691221ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.494589  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.494666  674837 retry.go:31] will retry after 347.211429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.497652  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:25:39.497699  674837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:25:39.522536  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:25:39.522584  674837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:25:39.546586  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:25:39.546617  674837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0919 23:25:39.548384  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.548412  674837 retry.go:31] will retry after 294.337604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.566012  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:25:39.566030  674837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:25:39.584679  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:39.584705  674837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:25:39.603131  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:39.659084  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.659126  674837 retry.go:31] will retry after 170.3526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.824257  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.829607  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:39.842950  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.842945  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:39.894925  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.894980  674837 retry.go:31] will retry after 225.409439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.897913  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.897964  674837 retry.go:31] will retry after 505.694132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.913611  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.913645  674837 retry.go:31] will retry after 376.475703ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.913676  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.913710  674837 retry.go:31] will retry after 258.235731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
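The apply failures above are expected while kube-apiserver on localhost:8444 is still coming up (every validation step fails with "connection refused"); retry.go simply re-runs each kubectl apply after a short, randomized delay until it succeeds. A minimal sketch of that retry pattern (the delays and the callback are illustrative, not the actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between failures, in the spirit of the "will retry after ..." log lines.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused") // apiserver not serving yet
		}
		return nil // kubectl apply succeeded
	})
	fmt.Println("final result:", err)
}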
	W0919 23:25:35.721926  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:37.722686  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:40.222519  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:37.578970  673615 addons.go:514] duration metric: took 3.159245865s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0919 23:25:38.060882  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:38.066247  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.066275  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
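These /healthz probes come from a second profile started in parallel (log prefix 673615, apiserver at 192.168.94.2:8443); they keep returning 500 while the apiservice-discovery-controller post-start hook is still pending, and minikube polls until the endpoint returns 200. A hedged sketch of such a probe (TLS verification is skipped here purely for illustration; the real checker trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz probe failed:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}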
	I0919 23:25:38.560739  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:38.565272  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.565303  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:39.060625  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:39.065273  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:39.065302  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:39.560637  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:39.564973  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:39.565015  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:40.060260  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:40.064604  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:40.064631  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:40.560227  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:40.564386  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:40.564408  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.695125  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:41.195288  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:40.120637  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:40.172638  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:40.175640  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.175673  674837 retry.go:31] will retry after 463.562458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:40.233280  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.233322  674837 retry.go:31] will retry after 709.868249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.290287  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:40.346042  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.346070  674837 retry.go:31] will retry after 596.08637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.404268  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:40.464443  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.464480  674837 retry.go:31] will retry after 520.136858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.639715  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:40.695405  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.695451  674837 retry.go:31] will retry after 445.187627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.942744  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:40.943313  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:40.985184  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:41.014753  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.014794  674837 retry.go:31] will retry after 940.601778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.014794  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.014824  674837 retry.go:31] will retry after 1.053794311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.056387  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.056431  674837 retry.go:31] will retry after 475.710606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.141589  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:41.204433  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.204468  674837 retry.go:31] will retry after 1.290660505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.451870  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:41.533007  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:41.591782  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.591824  674837 retry.go:31] will retry after 1.793211427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.955544  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:42.015312  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.015347  674837 retry.go:31] will retry after 1.274198901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.069468  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:42.130804  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.130839  674837 retry.go:31] will retry after 762.247396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.495886  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:42.569630  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.569666  674837 retry.go:31] will retry after 1.252116034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.893718  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:42.955347  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.955386  674837 retry.go:31] will retry after 2.462259291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.290702  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:43.348777  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.348830  674837 retry.go:31] will retry after 1.606378233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.385981  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:43.443332  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.443368  674837 retry.go:31] will retry after 2.094940082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:43.452768  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:43.822145  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:43.880051  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.880086  674837 retry.go:31] will retry after 1.63512815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:42.722025  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:44.722773  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:41.060866  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:41.065935  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:41.065967  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:41.560293  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:41.564900  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:41.564935  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:42.060580  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:42.065948  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0919 23:25:42.066916  673615 api_server.go:141] control plane version: v1.34.0
	I0919 23:25:42.066941  673615 api_server.go:131] duration metric: took 4.506796265s to wait for apiserver health ...
	I0919 23:25:42.066949  673615 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:25:42.070613  673615 system_pods.go:59] 8 kube-system pods found
	I0919 23:25:42.070647  673615 system_pods.go:61] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:42.070658  673615 system_pods.go:61] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:25:42.070696  673615 system_pods.go:61] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:25:42.070705  673615 system_pods.go:61] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:25:42.070713  673615 system_pods.go:61] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:25:42.070721  673615 system_pods.go:61] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:25:42.070737  673615 system_pods.go:61] "metrics-server-746fcd58dc-sptn4" [4decf9fa-5593-4e44-9579-ba7f183d4fed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:25:42.070742  673615 system_pods.go:61] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Running
	I0919 23:25:42.070751  673615 system_pods.go:74] duration metric: took 3.795459ms to wait for pod list to return data ...
	I0919 23:25:42.070760  673615 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:25:42.073120  673615 default_sa.go:45] found service account: "default"
	I0919 23:25:42.073139  673615 default_sa.go:55] duration metric: took 2.372853ms for default service account to be created ...
	I0919 23:25:42.073147  673615 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:25:42.075734  673615 system_pods.go:86] 8 kube-system pods found
	I0919 23:25:42.075759  673615 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:42.075766  673615 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:25:42.075774  673615 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:25:42.075782  673615 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:25:42.075789  673615 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:25:42.075794  673615 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:25:42.075802  673615 system_pods.go:89] "metrics-server-746fcd58dc-sptn4" [4decf9fa-5593-4e44-9579-ba7f183d4fed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:25:42.075805  673615 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Running
	I0919 23:25:42.075817  673615 system_pods.go:126] duration metric: took 2.659095ms to wait for k8s-apps to be running ...
	I0919 23:25:42.075826  673615 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:25:42.075863  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:25:42.088470  673615 system_svc.go:56] duration metric: took 12.622937ms WaitForService to wait for kubelet
	I0919 23:25:42.088493  673615 kubeadm.go:578] duration metric: took 7.668820061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:25:42.088563  673615 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:25:42.091827  673615 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:25:42.091853  673615 node_conditions.go:123] node cpu capacity is 8
	I0919 23:25:42.091866  673615 node_conditions.go:105] duration metric: took 3.29892ms to run NodePressure ...
	I0919 23:25:42.091878  673615 start.go:241] waiting for startup goroutines ...
	I0919 23:25:42.091884  673615 start.go:246] waiting for cluster config update ...
	I0919 23:25:42.091900  673615 start.go:255] writing updated cluster config ...
	I0919 23:25:42.092207  673615 ssh_runner.go:195] Run: rm -f paused
	I0919 23:25:42.095921  673615 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:42.099879  673615 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tv82" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:25:44.104844  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:43.693879  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:45.694423  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:47.695639  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:44.955840  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:45.012801  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.012838  674837 retry.go:31] will retry after 3.847878931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.418368  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:45.482236  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.482273  674837 retry.go:31] will retry after 1.591517849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.515943  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:45.538727  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:45.577679  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.577718  674837 retry.go:31] will retry after 4.874202788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:45.601250  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.601285  674837 retry.go:31] will retry after 3.880703529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:45.951829  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:47.074084  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:47.138292  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:47.138325  674837 retry.go:31] will retry after 3.906263754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:47.952474  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:48.861736  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:48.930715  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:48.930754  674837 retry.go:31] will retry after 5.934858241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:49.482298  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:49.551667  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:49.551713  674837 retry.go:31] will retry after 4.622892988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:47.221542  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:49.221765  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:46.105987  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:48.605715  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:50.606208  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:50.194445  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:52.194756  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:49.952741  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:50.452954  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:50.519567  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:50.519612  674837 retry.go:31] will retry after 5.244482678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:51.045317  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:51.105296  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:51.105335  674837 retry.go:31] will retry after 8.297441162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:51.952829  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:54.175174  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:54.233382  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:54.233429  674837 retry.go:31] will retry after 6.050312194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:54.451892  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:54.866423  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:54.922478  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:54.922540  674837 retry.go:31] will retry after 9.107847114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:51.222298  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:53.721797  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:53.104877  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:55.104931  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:54.196279  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:54.693762  660928 pod_ready.go:94] pod "coredns-5dd5756b68-q75nl" is "Ready"
	I0919 23:25:54.693791  660928 pod_ready.go:86] duration metric: took 59.504989655s for pod "coredns-5dd5756b68-q75nl" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.696671  660928 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.700420  660928 pod_ready.go:94] pod "etcd-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.700446  660928 pod_ready.go:86] duration metric: took 3.753342ms for pod "etcd-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.702876  660928 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.706864  660928 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.706882  660928 pod_ready.go:86] duration metric: took 3.98047ms for pod "kube-apiserver-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.709313  660928 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.893212  660928 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.893246  660928 pod_ready.go:86] duration metric: took 183.913473ms for pod "kube-controller-manager-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.093814  660928 pod_ready.go:83] waiting for pod "kube-proxy-hvp2z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.492837  660928 pod_ready.go:94] pod "kube-proxy-hvp2z" is "Ready"
	I0919 23:25:55.492867  660928 pod_ready.go:86] duration metric: took 399.028031ms for pod "kube-proxy-hvp2z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.693615  660928 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.092983  660928 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-359569" is "Ready"
	I0919 23:25:56.093016  660928 pod_ready.go:86] duration metric: took 399.373804ms for pod "kube-scheduler-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.093032  660928 pod_ready.go:40] duration metric: took 1m0.909722957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:56.139044  660928 start.go:617] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I0919 23:25:56.140388  660928 out.go:203] 
	W0919 23:25:56.141437  660928 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I0919 23:25:56.142544  660928 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0919 23:25:56.143770  660928 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-359569" cluster and "default" namespace by default
	I0919 23:25:56.222688  666828 pod_ready.go:94] pod "coredns-66bc5c9577-z2rcs" is "Ready"
	I0919 23:25:56.222713  666828 pod_ready.go:86] duration metric: took 37.006414087s for pod "coredns-66bc5c9577-z2rcs" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.225199  666828 pod_ready.go:83] waiting for pod "etcd-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.229161  666828 pod_ready.go:94] pod "etcd-no-preload-834234" is "Ready"
	I0919 23:25:56.229189  666828 pod_ready.go:86] duration metric: took 3.965294ms for pod "etcd-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.231384  666828 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.235209  666828 pod_ready.go:94] pod "kube-apiserver-no-preload-834234" is "Ready"
	I0919 23:25:56.235228  666828 pod_ready.go:86] duration metric: took 3.823926ms for pod "kube-apiserver-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.236996  666828 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.420952  666828 pod_ready.go:94] pod "kube-controller-manager-no-preload-834234" is "Ready"
	I0919 23:25:56.420979  666828 pod_ready.go:86] duration metric: took 183.963069ms for pod "kube-controller-manager-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.620959  666828 pod_ready.go:83] waiting for pod "kube-proxy-ljrsp" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.020879  666828 pod_ready.go:94] pod "kube-proxy-ljrsp" is "Ready"
	I0919 23:25:57.020909  666828 pod_ready.go:86] duration metric: took 399.925626ms for pod "kube-proxy-ljrsp" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.221140  666828 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.620967  666828 pod_ready.go:94] pod "kube-scheduler-no-preload-834234" is "Ready"
	I0919 23:25:57.620997  666828 pod_ready.go:86] duration metric: took 399.824833ms for pod "kube-scheduler-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.621011  666828 pod_ready.go:40] duration metric: took 38.416192153s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:57.669247  666828 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:25:57.671277  666828 out.go:179] * Done! kubectl is now configured to use "no-preload-834234" cluster and "default" namespace by default
	I0919 23:25:55.764272  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:55.827173  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:55.827221  674837 retry.go:31] will retry after 6.475736064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:56.452694  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	W0919 23:25:58.951882  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:59.403573  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:59.462484  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:59.462537  674837 retry.go:31] will retry after 6.573954523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:57.105049  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:59.105629  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	I0919 23:26:00.284343  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:26:00.342903  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:00.342936  674837 retry.go:31] will retry after 9.28995248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:00.951959  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:02.303221  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:26:02.361047  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:02.361086  674837 retry.go:31] will retry after 19.573085188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:02.952610  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:04.031187  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:26:04.103367  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:04.103405  674837 retry.go:31] will retry after 12.611866796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:01.105835  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:03.605633  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:05.451952  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:06.037622  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:26:06.094888  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:06.094920  674837 retry.go:31] will retry after 15.716692606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:07.452100  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:09.633151  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:26:06.105406  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:08.105556  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:10.105867  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	
	
	==> Docker <==
	Sep 19 23:25:06 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:06.838532791Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:06 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:06.838659576Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:25:06 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:25:06 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:06.974590534Z" level=info msg="ignoring event" container=6f8b342db16bd367d4d65941a4951ca3f826b1ed68fe977f9ccc1c87279e46c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:25:07 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:07.112077588Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 19 23:25:11 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:11Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.946022115Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.946069894Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.947793574Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.947824632Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.869034105Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.924145926Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.924268263Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:25:20 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:20Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:25:23 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:23.470816236Z" level=info msg="ignoring event" container=63c04f0517916ae38fdb13b4b0b8ca78204065a9545643175634b090d4b1324c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.879420971Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.879465215Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.881358553Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.881397959Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:41 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:41.952250813Z" level=info msg="ignoring event" container=12c2167f10bd3034771c5245f24d50d5de4222c935494183ccefd849b58749a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:25:48 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:48.862327339Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:49 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:49.152868193Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:49 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:49.153042723Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:25:49 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:49Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:26:10 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:26:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a2d9e9d5fed8c       07655ddf2eebe                                                                                         29 seconds ago       Running             kubernetes-dashboard      1                   c7548b753bce7       kubernetes-dashboard-8694d4445c-nlr4d
	9fa04b5a65fb2       6e38f40d628db                                                                                         33 seconds ago       Running             storage-provisioner       6                   bcb3f1d156708       storage-provisioner
	4726cd10ee5ef       ea1030da44aa1                                                                                         40 seconds ago       Running             kube-proxy                8                   6ae95751ec3e6       kube-proxy-hvp2z
	12c2167f10bd3       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        About a minute ago   Exited              kubernetes-dashboard      0                   c7548b753bce7       kubernetes-dashboard-8694d4445c-nlr4d
	6f8b342db16bd       ea1030da44aa1                                                                                         About a minute ago   Exited              kube-proxy                7                   6ae95751ec3e6       kube-proxy-hvp2z
	cf3b9fabeb74c       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   4cd1dfbfefb19       busybox
	b987ae99c1864       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   552a145ce96f5       coredns-5dd5756b68-q75nl
	63c04f0517916       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       5                   bcb3f1d156708       storage-provisioner
	037cf7c5d3be1       f6f496300a2ae                                                                                         About a minute ago   Running             kube-scheduler            1                   45ba06c4aa5bb       kube-scheduler-old-k8s-version-359569
	7f75bd54ab8d8       bb5e0dde9054c                                                                                         About a minute ago   Running             kube-apiserver            1                   6de24586c2ec0       kube-apiserver-old-k8s-version-359569
	92f5227dd2bce       4be79c38a4bab                                                                                         About a minute ago   Running             kube-controller-manager   1                   4b51b5626427a       kube-controller-manager-old-k8s-version-359569
	3c553bb4cb66a       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      1                   e8c784e603e43       etcd-old-k8s-version-359569
	24cba55f27ce7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   About a minute ago   Exited              busybox                   0                   93ec1d27ab925       busybox
	07d67388cf6cf       ead0a4a53df89                                                                                         6 minutes ago        Exited              coredns                   0                   161796527f3af       coredns-5dd5756b68-q75nl
	16a6dcf2464a7       bb5e0dde9054c                                                                                         6 minutes ago        Exited              kube-apiserver            0                   59520b69eca50       kube-apiserver-old-k8s-version-359569
	d2da53d03680f       4be79c38a4bab                                                                                         6 minutes ago        Exited              kube-controller-manager   0                   5bdd0c3014438       kube-controller-manager-old-k8s-version-359569
	a6ca7dd11600f       73deb9a3f7025                                                                                         6 minutes ago        Exited              etcd                      0                   0a1d0a4a5e8ac       etcd-old-k8s-version-359569
	dc91f93ea3d06       f6f496300a2ae                                                                                         6 minutes ago        Exited              kube-scheduler            0                   efda4b3258a50       kube-scheduler-old-k8s-version-359569
	
	
	==> coredns [07d67388cf6c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b987ae99c186] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33582 - 30568 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 6.001999861s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:40803->192.168.103.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:57889 - 5513 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 6.001761535s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:48812->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:43992 - 65473 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 4.001933173s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:55087->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:60983 - 51657 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.00102617s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:56781->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:40603 - 47036 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.00026162s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:53386->192.168.103.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:45749 - 41213 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.000554092s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:38992->192.168.103.1:53: i/o timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-359569
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-359569
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=old-k8s-version-359569
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_19_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:19:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-359569
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:26:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:26:10 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-359569
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 18f48281b08d485ab8cfd87318391c82
	  System UUID:                5a3ce1f6-0d12-4d86-96a8-fc8a854ce373
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 coredns-5dd5756b68-q75nl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m4s
	  kube-system                 etcd-old-k8s-version-359569                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m17s
	  kube-system                 kube-apiserver-old-k8s-version-359569             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-old-k8s-version-359569    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-hvp2z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-old-k8s-version-359569             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 metrics-server-57f55c9bc5-rrcl7                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         110s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ddwj8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-nlr4d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 39s                    kube-proxy       
	  Normal  Starting                 6m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m23s (x8 over 6m23s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s (x8 over 6m23s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s (x7 over 6m23s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m17s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s                   node-controller  Node old-k8s-version-359569 event: Registered Node old-k8s-version-359569 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x9 over 82s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x7 over 82s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node old-k8s-version-359569 event: Registered Node old-k8s-version-359569 in Controller
	  Normal  Starting                 2s                     kubelet          Starting kubelet.
	  Normal  Starting                 1s                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  1s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    1s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     1s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             1s                     kubelet          Node old-k8s-version-359569 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  1s                     kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.648929] IPv4: martian source 10.244.0.1 from 10.244.0.8, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 05 15 13 13 8c 08 06
	[  +0.349912] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.005224] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.995125] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.506127] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.500833] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.994986] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.505925] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501603] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.993779] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.507835] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501321] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.990961] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[Sep19 23:26] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501557] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.990813] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.510399] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.500969] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.989916] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.510723] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501805] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.987992] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.513010] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501157] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	
	
	==> etcd [3c553bb4cb66] <==
	{"level":"info","ts":"2025-09-19T23:24:50.612642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-09-19T23:24:50.612677Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-09-19T23:24:50.612797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-09-19T23:24:50.612991Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:24:50.61358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:24:50.616455Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T23:24:50.616842Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-19T23:24:50.616921Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-19T23:24:50.617331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:50.617555Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:51.703252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.7033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.703363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.703391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.704569Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-359569 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:24:51.704616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:24:51.704694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:24:51.704762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:24:51.704811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:24:51.706973Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:24:51.707057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-19T23:25:49.35754Z","caller":"traceutil/trace.go:171","msg":"trace[286202149] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"109.642649ms","start":"2025-09-19T23:25:49.247843Z","end":"2025-09-19T23:25:49.357486Z","steps":["trace[286202149] 'process raft request'  (duration: 104.807918ms)"],"step_count":1}
	
	
	==> etcd [a6ca7dd11600] <==
	{"level":"info","ts":"2025-09-19T23:19:49.684921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.684943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.686009Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686978Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687283Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687402Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686679Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-359569 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:19:49.688392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:19:49.689083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-19T23:24:22.066418Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-19T23:24:22.066594Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-359569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2025-09-19T23:24:22.066743Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.066773Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.068006Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.068132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-19T23:24:22.088104Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-19T23:24:22.09009Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:22.090234Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:22.09029Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-359569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> kernel <==
	 23:26:12 up  2:08,  0 users,  load average: 2.02, 2.35, 3.17
	Linux old-k8s-version-359569 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [16a6dcf2464a] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083175       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083191       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083180       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7f75bd54ab8d] <==
	I0919 23:24:54.654913       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.48.221"}
	W0919 23:24:55.094601       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 23:24:57.740361       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 request timed out
	I0919 23:24:57.740389       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 23:25:02.733379       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0919 23:25:05.635085       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0919 23:25:05.733823       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:25:05.935777       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 23:25:05.935777       1 controller.go:624] quota admission added evaluator for: endpoints
	E0919 23:25:12.733742       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:22.734657       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:32.735163       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:42.736241       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I0919 23:25:52.632549       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.217.254:443: connect: connection refused
	I0919 23:25:52.632573       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 23:25:52.737506       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	W0919 23:25:53.738180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 23:25:53.738222       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 23:25:53.738232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:25:53.739307       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 23:25:53.739375       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 23:25:53.739393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:26:02.738186       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [92f5227dd2bc] <==
	I0919 23:25:12.150128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.987144ms"
	I0919 23:25:12.150224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.044µs"
	I0919 23:25:20.809561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="153.307µs"
	I0919 23:25:25.815134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="99.525µs"
	E0919 23:25:35.690476       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 23:25:36.112624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 23:25:36.815681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.298µs"
	I0919 23:25:40.810468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="91.021µs"
	I0919 23:25:42.464354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.766676ms"
	I0919 23:25:42.465273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="167.688µs"
	I0919 23:25:43.480607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.557651ms"
	I0919 23:25:43.480715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.716µs"
	I0919 23:25:48.811342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.998µs"
	I0919 23:25:51.809249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="79.344µs"
	I0919 23:25:54.665373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.940381ms"
	I0919 23:25:54.665554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.949µs"
	I0919 23:26:03.809422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.613µs"
	I0919 23:26:03.818902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="93.248µs"
	E0919 23:26:05.432859       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.434071       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.435264       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.694895       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 23:26:06.119962       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 23:26:11.703673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.564µs"
	I0919 23:26:11.737577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.072µs"
	
	
	==> kube-controller-manager [d2da53d03680] <==
	E0919 23:20:56.673061       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.673794       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.675299       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.676048       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405139       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405140       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.674370       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.675802       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.676323       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406111       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406144       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.674482       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676638       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676640       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407306       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407318       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.674821       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	I0919 23:24:21.724575       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0919 23:24:21.732070       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-rrcl7"
	I0919 23:24:21.743884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="19.488983ms"
	I0919 23:24:21.756125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="12.181422ms"
	I0919 23:24:21.756238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.975µs"
	I0919 23:24:21.757612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="84.823µs"
	
	
	==> kube-proxy [4726cd10ee5e] <==
	I0919 23:25:31.955227       1 server_others.go:69] "Using iptables proxy"
	I0919 23:25:31.966149       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0919 23:25:31.988376       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:25:31.990994       1 server_others.go:152] "Using iptables Proxier"
	I0919 23:25:31.991032       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0919 23:25:31.991040       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0919 23:25:31.991068       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 23:25:31.991351       1 server.go:846] "Version info" version="v1.28.0"
	I0919 23:25:31.991365       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:25:31.992086       1 config.go:97] "Starting endpoint slice config controller"
	I0919 23:25:31.992092       1 config.go:188] "Starting service config controller"
	I0919 23:25:31.992129       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 23:25:31.992133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 23:25:31.992180       1 config.go:315] "Starting node config controller"
	I0919 23:25:31.992214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 23:25:32.092941       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 23:25:32.092957       1 shared_informer.go:318] Caches are synced for service config
	I0919 23:25:32.092974       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [6f8b342db16b] <==
	E0919 23:25:06.955956       1 run.go:74] "command failed" err="failed complete: too many open files"
	
	
	==> kube-scheduler [037cf7c5d3be] <==
	I0919 23:24:51.417512       1 serving.go:348] Generated self-signed cert in-memory
	W0919 23:24:52.662834       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:24:52.662869       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:24:52.662882       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:24:52.662893       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:24:52.687375       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0919 23:24:52.687404       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:52.689700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:52.689752       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 23:24:52.691408       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 23:24:52.691594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 23:24:52.790100       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dc91f93ea3d0] <==
	W0919 23:19:51.050936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.050961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.051051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.876106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 23:19:51.876148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 23:19:51.951405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.951449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.979303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.979350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 23:19:51.979365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.979382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 23:19:51.990556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.990608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.017451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 23:19:52.017493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 23:19:52.032120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 23:19:52.032165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 23:19:52.238694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:52.238739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.240416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 23:19:52.240457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0919 23:19:52.544907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 23:24:22.089209       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: I0919 23:26:10.841447    4622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161796527f3afd533a2b4bc534a0cd043e92f1b8775e7f508dc61fe7b5ceed38"
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: I0919 23:26:10.863924    4622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bdd0c30144387a690b6fe45b9436e6731783b85938e42fc56e12075f27c5266"
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: E0919 23:26:10.871993    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-359569\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-359569"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.553955    4622 apiserver.go:52] "Watching apiserver"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.571654    4622 topology_manager.go:215] "Topology Admit Handler" podUID="0fafe72c-6f1b-4001-971f-54b044acb1cd" podNamespace="kube-system" podName="coredns-5dd5756b68-q75nl"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.571894    4622 topology_manager.go:215] "Topology Admit Handler" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be" podNamespace="kube-system" podName="kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.572702    4622 topology_manager.go:215] "Topology Admit Handler" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01" podNamespace="kube-system" podName="storage-provisioner"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.576182    4622 topology_manager.go:215] "Topology Admit Handler" podUID="5b59928a-3af7-4037-882a-de2e0f43bd9c" podNamespace="default" podName="busybox"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.576315    4622 topology_manager.go:215] "Topology Admit Handler" podUID="d09f48f6-888a-467e-b82b-d4847477a8ac" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-rrcl7"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.578673    4622 topology_manager.go:215] "Topology Admit Handler" podUID="79f7fb7f-084e-49c8-89ec-4c532a4ccf19" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-nlr4d"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.578904    4622 topology_manager.go:215] "Topology Admit Handler" podUID="22bbd461-d78b-4aa2-8860-9b8628063030" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ddwj8"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600690    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be-lib-modules\") pod \"kube-proxy-hvp2z\" (UID: \"8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be\") " pod="kube-system/kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600775    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef0a9cd7-6497-4877-8fc6-286067f0db01-tmp\") pod \"storage-provisioner\" (UID: \"ef0a9cd7-6497-4877-8fc6-286067f0db01\") " pod="kube-system/storage-provisioner"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600816    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be-xtables-lock\") pod \"kube-proxy-hvp2z\" (UID: \"8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be\") " pod="kube-system/kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.676889    4622 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: E0919 23:26:11.892529    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-359569\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-359569"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: E0919 23:26:11.897134    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-old-k8s-version-359569\" already exists" pod="kube-system/kube-scheduler-old-k8s-version-359569"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043095    4622 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043220    4622 kuberuntime_image.go:53] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043890    4622 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ng9kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-ddwj8_kubernetes-dashboard(22bbd461-d78b-4aa2-8860-9b8628063030): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.044539    4622 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ddwj8" podUID="22bbd461-d78b-4aa2-8860-9b8628063030"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.119744    4622 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.119831    4622 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.120182    4622 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lxnqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rrcl7_kube-system(d09f48f6-888a-467e-b82b-d4847477a8ac): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.120280    4622 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rrcl7" podUID="d09f48f6-888a-467e-b82b-d4847477a8ac"
	
	
	==> kubernetes-dashboard [12c2167f10bd] <==
	2025/09/19 23:25:11 Using namespace: kubernetes-dashboard
	2025/09/19 23:25:11 Using in-cluster config to connect to apiserver
	2025/09/19 23:25:11 Using secret token for csrf signing
	2025/09/19 23:25:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:25:11 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00059fae8)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00043c100)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19aba3a?)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:96 +0x1cf
	
	
	==> kubernetes-dashboard [a2d9e9d5fed8] <==
	2025/09/19 23:25:42 Starting overwatch
	2025/09/19 23:25:42 Using namespace: kubernetes-dashboard
	2025/09/19 23:25:42 Using in-cluster config to connect to apiserver
	2025/09/19 23:25:42 Using secret token for csrf signing
	2025/09/19 23:25:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:25:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:25:42 Successful initial request to the apiserver, version: v1.28.0
	2025/09/19 23:25:42 Generating JWE encryption key
	2025/09/19 23:25:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:25:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:25:42 Initializing JWE encryption key from synchronized object
	2025/09/19 23:25:42 Creating in-cluster Sidecar client
	2025/09/19 23:25:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:25:42 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [63c04f051791] <==
	I0919 23:24:53.449631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:25:23.454057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9fa04b5a65fb] <==
	I0919 23:25:38.906056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:25:38.916294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:25:38.916357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 23:25:38.928041       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:25:38.928241       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee19f75f-e3bd-4f8c-a05f-4be3ebc50a28", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8 became leader
	I0919 23:25:38.928308       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8!
	I0919 23:25:39.028710       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-359569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8: exit status 1 (59.395824ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rrcl7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-ddwj8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-359569
helpers_test.go:243: (dbg) docker inspect old-k8s-version-359569:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	        "Created": "2025-09-19T23:19:37.347852462Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 661119,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:24:33.056461271Z",
	            "FinishedAt": "2025-09-19T23:24:32.248759953Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/hosts",
	        "LogPath": "/var/lib/docker/containers/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05/1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05-json.log",
	        "Name": "/old-k8s-version-359569",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-359569:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-359569",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ae574ad604d3f137b0b7c0e0640afdc73e087b424fa17828d6583fd2ba79f05",
	                "LowerDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2-init/diff:/var/lib/docker/overlay2/9d2e369e5d97e1c9099e0626e9d6e97dbea1f066bb5f1a75d4701fbdb3248b63/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18ea4b2e3c2762c068d0dc5265069b59364ab6f42301149f86a9f12790b934e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-359569",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-359569/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-359569",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-359569",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e88a76e3335fbecc57e02a3ae7db909ac48d0ae49aae9e7c2d5f0fa5cd07467",
	            "SandboxKey": "/var/run/docker/netns/3e88a76e3335",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-359569": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:dd:97:d1:85:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e1de8892b98e15a33d7c5eadc8f8aa4724fe6ba0a68c7bcaff3b9263e169c715",
	                    "EndpointID": "6dae4efcd74c9ce200d72a471c4968e75305f016140ccacfe8d3059354c0e548",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-359569",
	                        "1ae574ad604d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359569 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-359569 logs -n 25: (1.314560208s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ delete  │ -p disable-driver-mounts-481061                                                                                                                                                                                                                 │ disable-driver-mounts-481061 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-359569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p old-k8s-version-359569 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-359569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p no-preload-834234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p no-preload-834234 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable dashboard -p no-preload-834234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p embed-certs-253767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ stop    │ -p embed-certs-253767 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-485703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ stop    │ -p default-k8s-diff-port-485703 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable dashboard -p embed-certs-253767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-253767           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-485703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ start   │ -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-485703 │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │                     │
	│ image   │ old-k8s-version-359569 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ pause   │ -p old-k8s-version-359569 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ unpause │ -p old-k8s-version-359569 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-359569       │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ image   │ no-preload-834234 image list --format=json                                                                                                                                                                                                      │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ pause   │ -p no-preload-834234 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ unpause │ -p no-preload-834234 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │ 19 Sep 25 23:26 UTC │
	│ delete  │ -p no-preload-834234                                                                                                                                                                                                                            │ no-preload-834234            │ jenkins │ v1.37.0 │ 19 Sep 25 23:26 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:25:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:25:29.944138  674837 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:25:29.944250  674837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:25:29.944258  674837 out.go:374] Setting ErrFile to fd 2...
	I0919 23:25:29.944264  674837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:25:29.944514  674837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:25:29.944978  674837 out.go:368] Setting JSON to false
	I0919 23:25:29.946216  674837 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7666,"bootTime":1758316664,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:25:29.946322  674837 start.go:140] virtualization: kvm guest
	I0919 23:25:29.948459  674837 out.go:179] * [default-k8s-diff-port-485703] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:25:29.949978  674837 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:25:29.950016  674837 notify.go:220] Checking for updates...
	I0919 23:25:29.952179  674837 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:25:29.953222  674837 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:29.954203  674837 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 23:25:29.955131  674837 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:25:29.956129  674837 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:25:29.957807  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:29.958552  674837 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:25:29.984101  674837 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:25:29.984220  674837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:25:30.039323  674837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:25:30.027980594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:25:30.039477  674837 docker.go:318] overlay module found
	I0919 23:25:30.041158  674837 out.go:179] * Using the docker driver based on existing profile
	I0919 23:25:30.042269  674837 start.go:304] selected driver: docker
	I0919 23:25:30.042286  674837 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:30.042399  674837 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:25:30.043110  674837 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:25:30.102993  674837 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:25:30.092119612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:25:30.103297  674837 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:25:30.103327  674837 cni.go:84] Creating CNI manager for ""
	I0919 23:25:30.103387  674837 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:30.103426  674837 start.go:348] cluster config:
	{Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:30.105302  674837 out.go:179] * Starting "default-k8s-diff-port-485703" primary control-plane node in "default-k8s-diff-port-485703" cluster
	I0919 23:25:30.106292  674837 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 23:25:30.107241  674837 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:25:30.108113  674837 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:30.108141  674837 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:25:30.108159  674837 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 23:25:30.108182  674837 cache.go:58] Caching tarball of preloaded images
	I0919 23:25:30.108327  674837 preload.go:172] Found /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 23:25:30.108350  674837 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0919 23:25:30.108539  674837 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:25:30.127982  674837 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:25:30.128001  674837 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:25:30.128018  674837 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:25:30.128046  674837 start.go:360] acquireMachinesLock for default-k8s-diff-port-485703: {Name:mk6951b47a07a3f8003f766143829366ba3d9245 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:25:30.128110  674837 start.go:364] duration metric: took 40.216µs to acquireMachinesLock for "default-k8s-diff-port-485703"
	I0919 23:25:30.128133  674837 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:25:30.128142  674837 fix.go:54] fixHost starting: 
	I0919 23:25:30.128356  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:30.147490  674837 fix.go:112] recreateIfNeeded on default-k8s-diff-port-485703: state=Stopped err=<nil>
	W0919 23:25:30.147539  674837 fix.go:138] unexpected machine state, will restart: <nil>
	W0919 23:25:26.223906  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:28.721831  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:26.205917  673615 out.go:252] * Restarting existing docker container for "embed-certs-253767" ...
	I0919 23:25:26.205998  673615 cli_runner.go:164] Run: docker start embed-certs-253767
	I0919 23:25:26.479850  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:26.501321  673615 kic.go:430] container "embed-certs-253767" state is running.
	I0919 23:25:26.501793  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:26.523190  673615 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/config.json ...
	I0919 23:25:26.523458  673615 machine.go:93] provisionDockerMachine start ...
	I0919 23:25:26.523555  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:26.544548  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:26.544902  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:26.544920  673615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:25:26.545682  673615 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:33158: read: connection reset by peer
	I0919 23:25:29.684602  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:25:29.684646  673615 ubuntu.go:182] provisioning hostname "embed-certs-253767"
	I0919 23:25:29.684721  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:29.703720  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:29.703921  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:29.703934  673615 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-253767 && echo "embed-certs-253767" | sudo tee /etc/hostname
	I0919 23:25:29.871799  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-253767
	
	I0919 23:25:29.871865  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:29.890816  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:29.891092  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:29.891122  673615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-253767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-253767/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-253767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:25:30.033720  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:25:30.033769  673615 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:25:30.033797  673615 ubuntu.go:190] setting up certificates
	I0919 23:25:30.033811  673615 provision.go:84] configureAuth start
	I0919 23:25:30.033872  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:30.052684  673615 provision.go:143] copyHostCerts
	I0919 23:25:30.052755  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:25:30.052778  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:25:30.052863  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:25:30.053044  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:25:30.053057  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:25:30.053097  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:25:30.053198  673615 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:25:30.053209  673615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:25:30.053244  673615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:25:30.053332  673615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.embed-certs-253767 san=[127.0.0.1 192.168.94.2 embed-certs-253767 localhost minikube]
	I0919 23:25:30.234528  673615 provision.go:177] copyRemoteCerts
	I0919 23:25:30.234605  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:25:30.234674  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.257631  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:30.361350  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 23:25:30.389697  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:25:30.419544  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:25:30.448178  673615 provision.go:87] duration metric: took 414.351604ms to configureAuth
	I0919 23:25:30.448208  673615 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:25:30.448371  673615 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:30.448415  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.465572  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.465792  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.465803  673615 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:25:30.605066  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:25:30.605084  673615 ubuntu.go:71] root file system type: overlay
	I0919 23:25:30.605209  673615 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:25:30.605265  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.631307  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.631653  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.631765  673615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:25:30.798756  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:25:30.798841  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.818912  673615 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.819493  673615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I0919 23:25:30.819547  673615 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	W0919 23:25:28.194661  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:30.195559  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:32.195926  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:30.962591  673615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:25:30.962620  673615 machine.go:96] duration metric: took 4.439145746s to provisionDockerMachine
	I0919 23:25:30.962631  673615 start.go:293] postStartSetup for "embed-certs-253767" (driver="docker")
	I0919 23:25:30.962641  673615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:25:30.962702  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:25:30.962739  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:30.980604  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.077895  673615 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:25:31.081585  673615 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:25:31.081614  673615 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:25:31.081622  673615 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:25:31.081629  673615 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:25:31.081640  673615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:25:31.081704  673615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:25:31.081818  673615 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:25:31.081915  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:25:31.092920  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:31.119832  673615 start.go:296] duration metric: took 157.182424ms for postStartSetup
	I0919 23:25:31.119919  673615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:25:31.119957  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.138223  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.231108  673615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:25:31.235803  673615 fix.go:56] duration metric: took 5.057464858s for fixHost
	I0919 23:25:31.235827  673615 start.go:83] releasing machines lock for "embed-certs-253767", held for 5.057518817s
	I0919 23:25:31.235899  673615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-253767
	I0919 23:25:31.253706  673615 ssh_runner.go:195] Run: cat /version.json
	I0919 23:25:31.253762  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.253779  673615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:25:31.253846  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:31.273065  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.273279  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:31.438358  673615 ssh_runner.go:195] Run: systemctl --version
	I0919 23:25:31.443355  673615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:25:31.448118  673615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:25:31.467887  673615 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:25:31.467963  673615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:25:31.477879  673615 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 23:25:31.477911  673615 start.go:495] detecting cgroup driver to use...
	I0919 23:25:31.477948  673615 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:31.478067  673615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:31.495402  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:25:31.505927  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:25:31.516280  673615 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:25:31.516348  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:25:31.526965  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:31.537331  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:25:31.547987  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:31.558586  673615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:25:31.568224  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:25:31.578655  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:25:31.589139  673615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:25:31.599764  673615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:25:31.608667  673615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:25:31.617805  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:31.687545  673615 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 23:25:31.770333  673615 start.go:495] detecting cgroup driver to use...
	I0919 23:25:31.770382  673615 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:31.770426  673615 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:25:31.783922  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:31.796341  673615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:25:31.819064  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:31.833576  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:25:31.848452  673615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:31.868832  673615 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:25:31.872957  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:25:31.883296  673615 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:25:31.903423  673615 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:25:31.988302  673615 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:25:32.061857  673615 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:25:32.061989  673615 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0919 23:25:32.082566  673615 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:25:32.095079  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:32.167618  673615 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:25:33.003375  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:25:33.015216  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:25:33.026452  673615 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 23:25:33.038895  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:33.049653  673615 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:25:33.117398  673615 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:25:33.188911  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.264735  673615 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:25:33.286402  673615 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:25:33.297129  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.365641  673615 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:25:33.441273  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:33.454018  673615 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:25:33.454071  673615 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:25:33.457926  673615 start.go:563] Will wait 60s for crictl version
	I0919 23:25:33.457976  673615 ssh_runner.go:195] Run: which crictl
	I0919 23:25:33.461550  673615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:25:33.497887  673615 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:25:33.497957  673615 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:33.525153  673615 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:33.552270  673615 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:25:33.552361  673615 cli_runner.go:164] Run: docker network inspect embed-certs-253767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:25:33.569486  673615 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0919 23:25:33.573408  673615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:33.585675  673615 kubeadm.go:875] updating cluster {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:25:33.585819  673615 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:33.585885  673615 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:33.609143  673615 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:33.609163  673615 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:25:33.609218  673615 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:33.629836  673615 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:33.629860  673615 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:25:33.629873  673615 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0919 23:25:33.629982  673615 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-253767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:25:33.630118  673615 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:25:33.680444  673615 cni.go:84] Creating CNI manager for ""
	I0919 23:25:33.680484  673615 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:33.680510  673615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:25:33.680537  673615 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-253767 NodeName:embed-certs-253767 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:25:33.680698  673615 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-253767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:25:33.680771  673615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:25:33.690801  673615 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:25:33.690867  673615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:25:33.700842  673615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0919 23:25:33.719299  673615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:25:33.737940  673615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0919 23:25:33.756671  673615 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:25:33.760381  673615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:33.773375  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:33.841712  673615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:33.864983  673615 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767 for IP: 192.168.94.2
	I0919 23:25:33.865005  673615 certs.go:194] generating shared ca certs ...
	I0919 23:25:33.865024  673615 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:33.865198  673615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:25:33.865256  673615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:25:33.865269  673615 certs.go:256] generating profile certs ...
	I0919 23:25:33.865411  673615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/client.key
	I0919 23:25:33.865483  673615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key.590657ca
	I0919 23:25:33.865555  673615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key
	I0919 23:25:33.865698  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:25:33.865739  673615 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:25:33.865749  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:25:33.865781  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:25:33.865813  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:25:33.865841  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:25:33.865899  673615 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:33.866723  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:25:33.892712  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:25:33.920169  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:25:33.957470  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:25:33.991717  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0919 23:25:34.022657  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:25:34.047553  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:25:34.071680  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/embed-certs-253767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 23:25:34.104406  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:25:34.137052  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:25:34.166651  673615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:25:34.197156  673615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:25:34.218328  673615 ssh_runner.go:195] Run: openssl version
	I0919 23:25:34.225260  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:25:34.236384  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.240556  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.240707  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:34.248711  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:25:34.258472  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:25:34.268343  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.271889  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.271940  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:25:34.279086  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:25:34.288830  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:25:34.299196  673615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.302981  673615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.303036  673615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:25:34.310231  673615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:25:34.319230  673615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:25:34.322686  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:25:34.329163  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:25:34.335396  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:25:34.341948  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:25:34.348461  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:25:34.356117  673615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 23:25:34.362408  673615 kubeadm.go:392] StartCluster: {Name:embed-certs-253767 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-253767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:34.362564  673615 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:25:34.381704  673615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:25:34.391242  673615 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:25:34.391258  673615 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:25:34.391300  673615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:25:34.401755  673615 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:25:34.402708  673615 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-253767" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:34.403198  673615 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-253767" cluster setting kubeconfig missing "embed-certs-253767" context setting]
	I0919 23:25:34.403987  673615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.406026  673615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:25:34.417779  673615 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0919 23:25:34.417814  673615 kubeadm.go:593] duration metric: took 26.549362ms to restartPrimaryControlPlane
	I0919 23:25:34.417826  673615 kubeadm.go:394] duration metric: took 55.428161ms to StartCluster
	I0919 23:25:34.417844  673615 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.417945  673615 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:34.419387  673615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:34.419640  673615 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:25:34.419725  673615 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:25:34.419833  673615 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-253767"
	I0919 23:25:34.419854  673615 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-253767"
	I0919 23:25:34.419852  673615 config.go:182] Loaded profile config "embed-certs-253767": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	W0919 23:25:34.419863  673615 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:25:34.419894  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.419903  673615 addons.go:69] Setting default-storageclass=true in profile "embed-certs-253767"
	I0919 23:25:34.419921  673615 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-253767"
	I0919 23:25:34.419949  673615 addons.go:69] Setting metrics-server=true in profile "embed-certs-253767"
	I0919 23:25:34.419979  673615 addons.go:238] Setting addon metrics-server=true in "embed-certs-253767"
	W0919 23:25:34.419988  673615 addons.go:247] addon metrics-server should already be in state true
	I0919 23:25:34.420062  673615 addons.go:69] Setting dashboard=true in profile "embed-certs-253767"
	I0919 23:25:34.420083  673615 addons.go:238] Setting addon dashboard=true in "embed-certs-253767"
	W0919 23:25:34.420091  673615 addons.go:247] addon dashboard should already be in state true
	I0919 23:25:34.420123  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.420233  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.420391  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.420605  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.420712  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.421471  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.421733  673615 out.go:179] * Verifying Kubernetes components...
	I0919 23:25:34.424787  673615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:34.457957  673615 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:25:34.458049  673615 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:25:34.460043  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:25:34.460071  673615 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:25:34.460160  673615 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.460181  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:25:34.460238  673615 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:25:34.460331  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.460394  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.462809  673615 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:25:30.149200  674837 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-485703" ...
	I0919 23:25:30.149273  674837 cli_runner.go:164] Run: docker start default-k8s-diff-port-485703
	I0919 23:25:30.387349  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:30.407247  674837 kic.go:430] container "default-k8s-diff-port-485703" state is running.
	I0919 23:25:30.407676  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:30.426594  674837 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/config.json ...
	I0919 23:25:30.426854  674837 machine.go:93] provisionDockerMachine start ...
	I0919 23:25:30.426928  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:30.446414  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:30.446782  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:30.446804  674837 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:25:30.447602  674837 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58516->127.0.0.1:33164: read: connection reset by peer
	I0919 23:25:33.587482  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:25:33.587546  674837 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-485703"
	I0919 23:25:33.587610  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:33.607632  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:33.607911  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:33.607927  674837 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-485703 && echo "default-k8s-diff-port-485703" | sudo tee /etc/hostname
	I0919 23:25:33.755479  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-485703
	
	I0919 23:25:33.755591  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:33.774588  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:33.774800  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:33.774817  674837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-485703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-485703/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-485703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:25:33.912238  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
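The hostname step above patches /etc/hosts so the node name resolves locally. The same idempotent check, extracted from the SSH command in the log (the node name is the profile's; shown here as a variable for illustration):

    NODE=default-k8s-diff-port-485703              # profile name from the log above
    if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
        if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
            # rewrite the existing 127.0.1.1 mapping in place
            sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
        else
            # no 127.0.1.1 entry yet: append one
            echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
        fi
    fi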
	I0919 23:25:33.912266  674837 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-142711/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-142711/.minikube}
	I0919 23:25:33.912286  674837 ubuntu.go:190] setting up certificates
	I0919 23:25:33.912297  674837 provision.go:84] configureAuth start
	I0919 23:25:33.912358  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:33.937853  674837 provision.go:143] copyHostCerts
	I0919 23:25:33.937916  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem, removing ...
	I0919 23:25:33.937934  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem
	I0919 23:25:33.938004  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/cert.pem (1123 bytes)
	I0919 23:25:33.938149  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem, removing ...
	I0919 23:25:33.938166  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem
	I0919 23:25:33.938212  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/key.pem (1675 bytes)
	I0919 23:25:33.938321  674837 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem, removing ...
	I0919 23:25:33.938334  674837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem
	I0919 23:25:33.938376  674837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-142711/.minikube/ca.pem (1078 bytes)
	I0919 23:25:33.938493  674837 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-485703 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-485703 localhost minikube]
	I0919 23:25:34.741723  674837 provision.go:177] copyRemoteCerts
	I0919 23:25:34.741806  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:25:34.741862  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:34.768145  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:34.879088  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:25:34.915427  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	W0919 23:25:30.722158  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:33.222613  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:34.463964  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:25:34.463985  673615 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:25:34.464327  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.467526  673615 addons.go:238] Setting addon default-storageclass=true in "embed-certs-253767"
	W0919 23:25:34.467550  673615 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:25:34.467580  673615 host.go:66] Checking if "embed-certs-253767" exists ...
	I0919 23:25:34.470054  673615 cli_runner.go:164] Run: docker container inspect embed-certs-253767 --format={{.State.Status}}
	I0919 23:25:34.502904  673615 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:34.502928  673615 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:25:34.502997  673615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-253767
	I0919 23:25:34.503508  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.509839  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.513067  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.533679  673615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/embed-certs-253767/id_rsa Username:docker}
	I0919 23:25:34.575047  673615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:34.593003  673615 node_ready.go:35] waiting up to 6m0s for node "embed-certs-253767" to be "Ready" ...
	I0919 23:25:34.656058  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:25:34.656090  673615 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:25:34.662778  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.663632  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:25:34.663656  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:25:34.677738  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:34.699318  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:25:34.699355  673615 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:25:34.704895  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:25:34.704922  673615 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:25:34.745554  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:25:34.745607  673615 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:25:34.746344  673615 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:34.746368  673615 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0919 23:25:34.773324  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.773377  673615 retry.go:31] will retry after 147.461987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.780583  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:25:34.780610  673615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:25:34.781970  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:34.790336  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.790483  673615 retry.go:31] will retry after 355.110169ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.814777  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:25:34.814817  673615 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:25:34.841413  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:25:34.841449  673615 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0919 23:25:34.871187  673615 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.871221  673615 retry.go:31] will retry after 154.367143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:34.874274  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:25:34.874300  673615 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:25:34.901847  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:25:34.901892  673615 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:25:34.920987  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:34.939454  673615 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:34.939529  673615 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:25:34.975758  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:35.026033  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:35.146294  673615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:36.784314  673615 node_ready.go:49] node "embed-certs-253767" is "Ready"
	I0919 23:25:36.784347  673615 node_ready.go:38] duration metric: took 2.191280252s for node "embed-certs-253767" to be "Ready" ...
	I0919 23:25:36.784369  673615 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:25:36.784434  673615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:25:37.558995  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.63795515s)
	I0919 23:25:37.559336  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.583525608s)
	I0919 23:25:37.559403  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.533344282s)
	I0919 23:25:37.559744  673615 addons.go:479] Verifying addon metrics-server=true in "embed-certs-253767"
	I0919 23:25:37.559431  673615 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.41311586s)
	I0919 23:25:37.559477  673615 api_server.go:72] duration metric: took 3.139802295s to wait for apiserver process to appear ...
	I0919 23:25:37.560135  673615 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:25:37.560158  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:37.561307  673615 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-253767 addons enable metrics-server
	
	I0919 23:25:37.569215  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:37.569267  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
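The failing hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) are normal while the apiserver is still bootstrapping. The same verbose report can be pulled by hand if needed (cluster endpoint from the log; -k skips verification of the self-signed cert, anonymous access to /healthz assumed):

    curl -k "https://192.168.94.2:8443/healthz?verbose"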
	I0919 23:25:37.577309  673615 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	W0919 23:25:34.196181  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:36.199965  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:34.957009  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:25:34.993954  674837 provision.go:87] duration metric: took 1.081535884s to configureAuth
	I0919 23:25:34.993994  674837 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:25:34.994237  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:34.994309  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.017357  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.017635  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.017653  674837 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 23:25:35.168288  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0919 23:25:35.168314  674837 ubuntu.go:71] root file system type: overlay
	I0919 23:25:35.168467  674837 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 23:25:35.168608  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.193679  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.193981  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.194082  674837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 23:25:35.347805  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 23:25:35.347899  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.368801  674837 main.go:141] libmachine: Using SSH client type: native
	I0919 23:25:35.369117  674837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0919 23:25:35.369144  674837 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 23:25:35.516679  674837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
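The final provisioning command above is an idempotent swap: the freshly rendered unit is only installed, and Docker only restarted, when it differs from what is already on disk. The same logic spelled out (paths and commands as in the log):

    # compare the rendered unit with the installed one; only act on a difference
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi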
	I0919 23:25:35.516718  674837 machine.go:96] duration metric: took 5.089846476s to provisionDockerMachine
	I0919 23:25:35.516732  674837 start.go:293] postStartSetup for "default-k8s-diff-port-485703" (driver="docker")
	I0919 23:25:35.516746  674837 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:25:35.516829  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:25:35.516873  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.536305  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.635604  674837 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:25:35.639133  674837 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:25:35.639160  674837 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:25:35.639168  674837 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:25:35.639174  674837 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:25:35.639184  674837 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/addons for local assets ...
	I0919 23:25:35.639227  674837 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-142711/.minikube/files for local assets ...
	I0919 23:25:35.639305  674837 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem -> 1463352.pem in /etc/ssl/certs
	I0919 23:25:35.639411  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:25:35.648457  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:35.672674  674837 start.go:296] duration metric: took 155.926949ms for postStartSetup
	I0919 23:25:35.672754  674837 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:25:35.672822  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.697684  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.793585  674837 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:25:35.799221  674837 fix.go:56] duration metric: took 5.671063757s for fixHost
	I0919 23:25:35.799275  674837 start.go:83] releasing machines lock for "default-k8s-diff-port-485703", held for 5.671149761s
	I0919 23:25:35.799358  674837 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-485703
	I0919 23:25:35.819941  674837 ssh_runner.go:195] Run: cat /version.json
	I0919 23:25:35.819971  674837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:25:35.820005  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.820067  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:35.843007  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:35.843513  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:36.030660  674837 ssh_runner.go:195] Run: systemctl --version
	I0919 23:25:36.037062  674837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:25:36.042649  674837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0919 23:25:36.067796  674837 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:25:36.067888  674837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:25:36.079597  674837 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
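The find/sed pass above only ensures any existing loopback CNI config carries a "name" field and a 1.0.0 cniVersion; bridge/podman configs would have been renamed to *.mk_disabled, but none were present. After the patch the loopback file should have roughly this shape (filename matches the *loopback.conf* glob; contents are the expected form, not a verbatim capture):

    cat /etc/cni/net.d/*loopback.conf*
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }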
	I0919 23:25:36.079627  674837 start.go:495] detecting cgroup driver to use...
	I0919 23:25:36.079665  674837 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:36.079778  674837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:36.100659  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0919 23:25:36.112782  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 23:25:36.124598  674837 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0919 23:25:36.124656  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0919 23:25:36.136320  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:36.147880  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 23:25:36.159385  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 23:25:36.170114  674837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:25:36.181177  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 23:25:36.194719  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0919 23:25:36.207625  674837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0919 23:25:36.219742  674837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:25:36.231890  674837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:25:36.243222  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:36.322441  674837 ssh_runner.go:195] Run: sudo systemctl restart containerd
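The sed pass above rewrites /etc/containerd/config.toml in place: pause image pinned to registry.k8s.io/pause:3.10.1, runc v2 runtime, SystemdCgroup = true to match the detected systemd cgroup driver, and conf_dir pointed at /etc/cni/net.d. A quick way to confirm the edits landed before the restart (keys taken from the commands above):

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = true
    #   conf_dir = "/etc/cni/net.d"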
	I0919 23:25:36.408490  674837 start.go:495] detecting cgroup driver to use...
	I0919 23:25:36.408595  674837 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:25:36.408653  674837 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 23:25:36.421578  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:36.433325  674837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:25:36.457353  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:25:36.471564  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 23:25:36.483483  674837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:25:36.502383  674837 ssh_runner.go:195] Run: which cri-dockerd
	I0919 23:25:36.506157  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 23:25:36.515116  674837 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0919 23:25:36.533279  674837 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 23:25:36.609251  674837 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 23:25:36.695276  674837 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0919 23:25:36.695408  674837 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
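The 129-byte /etc/docker/daemon.json written above carries the cgroup-driver setting for dockerd itself; it most likely looks something like the following (reconstructed from the "configuring docker to use systemd as cgroup driver" step, not captured verbatim):

    cat /etc/docker/daemon.json
    # {
    #   "exec-opts": ["native.cgroupdriver=systemd"],
    #   "log-driver": "json-file",
    #   "log-opts": { "max-size": "100m" },
    #   "storage-driver": "overlay2"
    # }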
	I0919 23:25:36.721394  674837 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0919 23:25:36.737767  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:36.834649  674837 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 23:25:37.752406  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:25:37.767052  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0919 23:25:37.783206  674837 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0919 23:25:37.800742  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:37.815579  674837 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 23:25:37.897776  674837 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 23:25:37.983340  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.052677  674837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 23:25:38.080946  674837 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0919 23:25:38.093264  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.180601  674837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0919 23:25:38.264844  674837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0919 23:25:38.276756  674837 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 23:25:38.276811  674837 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 23:25:38.280626  674837 start.go:563] Will wait 60s for crictl version
	I0919 23:25:38.280672  674837 ssh_runner.go:195] Run: which crictl
	I0919 23:25:38.284150  674837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:25:38.318450  674837 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0919 23:25:38.318532  674837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:38.342018  674837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 23:25:38.367928  674837 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0919 23:25:38.368004  674837 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-485703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:25:38.384188  674837 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0919 23:25:38.388539  674837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:38.400834  674837 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:25:38.400954  674837 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 23:25:38.401002  674837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:38.420701  674837 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:38.420718  674837 docker.go:621] Images already preloaded, skipping extraction
	I0919 23:25:38.420760  674837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 23:25:38.440928  674837 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 23:25:38.440953  674837 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:25:38.440965  674837 kubeadm.go:926] updating node { 192.168.85.2 8444 v1.34.0 docker true true} ...
	I0919 23:25:38.441093  674837 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-485703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:25:38.441158  674837 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 23:25:38.492344  674837 cni.go:84] Creating CNI manager for ""
	I0919 23:25:38.492389  674837 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 23:25:38.492405  674837 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:25:38.492437  674837 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-485703 NodeName:default-k8s-diff-port-485703 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:25:38.492599  674837 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-485703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:25:38.492667  674837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:25:38.502838  674837 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:25:38.502912  674837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:25:38.512555  674837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0919 23:25:38.530876  674837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:25:38.550088  674837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
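The kubeadm config rendered above is what just landed at /var/tmp/minikube/kubeadm.yaml.new. If it needs to be sanity-checked outside the test, a dry run against the same binary exercises it without touching the node (paths from the surrounding log; illustrative, not part of the minikube flow):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new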
	I0919 23:25:38.570120  674837 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:25:38.573852  674837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:25:38.585399  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:38.653254  674837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:38.676776  674837 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703 for IP: 192.168.85.2
	I0919 23:25:38.676801  674837 certs.go:194] generating shared ca certs ...
	I0919 23:25:38.676822  674837 certs.go:226] acquiring lock for ca certs: {Name:mkc5df652d6204fd8687dfaaf83b02c6e10b58b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:38.677046  674837 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key
	I0919 23:25:38.677103  674837 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key
	I0919 23:25:38.677118  674837 certs.go:256] generating profile certs ...
	I0919 23:25:38.677231  674837 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/client.key
	I0919 23:25:38.677309  674837 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key.66b5ce16
	I0919 23:25:38.677358  674837 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key
	I0919 23:25:38.677493  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem (1338 bytes)
	W0919 23:25:38.677626  674837 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335_empty.pem, impossibly tiny 0 bytes
	I0919 23:25:38.677642  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:25:38.677676  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:25:38.677719  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:25:38.677751  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/certs/key.pem (1675 bytes)
	I0919 23:25:38.677808  674837 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem (1708 bytes)
	I0919 23:25:38.678394  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:25:38.705947  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 23:25:38.734669  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:25:38.764863  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:25:38.790776  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:25:38.819841  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 23:25:38.848339  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:25:38.876786  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/default-k8s-diff-port-485703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:25:38.905308  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/certs/146335.pem --> /usr/share/ca-certificates/146335.pem (1338 bytes)
	I0919 23:25:38.938353  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/ssl/certs/1463352.pem --> /usr/share/ca-certificates/1463352.pem (1708 bytes)
	I0919 23:25:38.963603  674837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:25:38.988188  674837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:25:39.006831  674837 ssh_runner.go:195] Run: openssl version
	I0919 23:25:39.012361  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/146335.pem && ln -fs /usr/share/ca-certificates/146335.pem /etc/ssl/certs/146335.pem"
	I0919 23:25:39.022912  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.026423  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.026475  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/146335.pem
	I0919 23:25:39.033152  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/146335.pem /etc/ssl/certs/51391683.0"
	I0919 23:25:39.042171  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1463352.pem && ln -fs /usr/share/ca-certificates/1463352.pem /etc/ssl/certs/1463352.pem"
	I0919 23:25:39.051651  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.055019  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.055065  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1463352.pem
	I0919 23:25:39.061797  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1463352.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:25:39.070800  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:25:39.079941  674837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.083840  674837 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:15 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.083886  674837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:25:39.091469  674837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:25:39.101098  674837 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:25:39.104980  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:25:39.113163  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:25:39.120982  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:25:39.128712  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:25:39.136485  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:25:39.144048  674837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
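	(Editor's note: each `openssl x509 -checkend 86400` run above asks whether the named certificate will still be valid 24 hours from now; exit status 0 means it will not expire within that window. An equivalent check in Go with crypto/x509 is sketched below; expiresWithin is a hypothetical helper, and the path is the apiserver-kubelet-client cert from the log.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```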
	I0919 23:25:39.152250  674837 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-485703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-485703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:25:39.152413  674837 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 23:25:39.176697  674837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:25:39.189840  674837 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:25:39.189863  674837 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:25:39.189915  674837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:25:39.201780  674837 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:25:39.202680  674837 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-485703" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:39.203231  674837 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-142711/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-485703" cluster setting kubeconfig missing "default-k8s-diff-port-485703" context setting]
	I0919 23:25:39.204610  674837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
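	(Editor's note: the kubeconfig at /home/jenkins/minikube-integration/21594-142711/kubeconfig is missing both the cluster and the context entry for this profile, so it is repaired under a file lock before proceeding. A rough sketch of that kind of repair using client-go's clientcmd package is below; addProfile is a hypothetical helper, the profile name, API server address, and CA path are taken from the log, and the code is illustrative rather than minikube's actual implementation.)

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// addProfile inserts a cluster and context entry for name into the kubeconfig
// at path, roughly what the "needs updating (will repair)" step amounts to.
func addProfile(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{
		Server:               server,
		CertificateAuthority: caFile,
	}
	cfg.Contexts[name] = &api.Context{
		Cluster:  name,
		AuthInfo: name,
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := addProfile(
		"/home/jenkins/minikube-integration/21594-142711/kubeconfig",
		"default-k8s-diff-port-485703",
		"https://192.168.85.2:8444",
		"/home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt",
	)
	if err != nil {
		log.Fatal(err)
	}
}
```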
	I0919 23:25:39.206748  674837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:25:39.218084  674837 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.85.2
	I0919 23:25:39.218123  674837 kubeadm.go:593] duration metric: took 28.253396ms to restartPrimaryControlPlane
	I0919 23:25:39.218136  674837 kubeadm.go:394] duration metric: took 65.898139ms to StartCluster
	I0919 23:25:39.218159  674837 settings.go:142] acquiring lock: {Name:mk0ff94a55db11c0f045ab7f983bc46c653527ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:39.218253  674837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 23:25:39.220609  674837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-142711/kubeconfig: {Name:mk4ed26fa289682c072e02c721ecb5e9a371ed27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:25:39.220908  674837 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 23:25:39.221038  674837 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:25:39.221175  674837 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221196  674837 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221211  674837 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221122  674837 config.go:182] Loaded profile config "default-k8s-diff-port-485703": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:25:39.221230  674837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-485703"
	I0919 23:25:39.221238  674837 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-485703"
	I0919 23:25:39.221250  674837 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.221258  674837 addons.go:247] addon metrics-server should already be in state true
	I0919 23:25:39.221296  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.221204  674837 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-485703"
	I0919 23:25:39.221231  674837 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.221474  674837 addons.go:247] addon dashboard should already be in state true
	I0919 23:25:39.221551  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.221638  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.221809  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.222010  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	W0919 23:25:39.222223  674837 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:25:39.222293  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.222837  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.224637  674837 out.go:179] * Verifying Kubernetes components...
	I0919 23:25:39.225647  674837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:25:39.247508  674837 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-485703"
	W0919 23:25:39.247537  674837 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:25:39.247576  674837 host.go:66] Checking if "default-k8s-diff-port-485703" exists ...
	I0919 23:25:39.248037  674837 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-485703 --format={{.State.Status}}
	I0919 23:25:39.248052  674837 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:25:39.249030  674837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:25:39.249034  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:25:39.249153  674837 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:25:39.249218  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.252333  674837 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:25:39.252395  674837 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.252420  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:25:39.252486  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.254741  674837 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:25:39.255724  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:25:39.255746  674837 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:25:39.255806  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.279725  674837 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:39.279754  674837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:25:39.279823  674837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-485703
	I0919 23:25:39.280152  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.284188  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.285454  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.304801  674837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/default-k8s-diff-port-485703/id_rsa Username:docker}
	I0919 23:25:39.330415  674837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:25:39.398772  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.398991  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:25:39.399017  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:25:39.406228  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:25:39.406251  674837 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:25:39.421926  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:39.423991  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:25:39.424015  674837 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:25:39.430092  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:25:39.430116  674837 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:25:39.447109  674837 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.447139  674837 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:25:39.450743  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:25:39.450767  674837 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:25:39.451228  674837 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-485703" to be "Ready" ...
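	(Editor's note: the node_ready.go line above starts a poll that waits up to 6 minutes for the node's Ready condition to become True. A small client-go sketch of such a wait loop is below; waitNodeReady is a hypothetical helper, with the node name, kubeconfig path, and timeout taken from the log, and the 2-second poll interval assumed.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True or the
// timeout expires, similar to the "waiting up to 6m0s for node" loop above.
func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %v", node, timeout)
}

func main() {
	if err := waitNodeReady("/var/lib/minikube/kubeconfig", "default-k8s-diff-port-485703", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```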
	I0919 23:25:39.472998  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.474109  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:25:39.474133  674837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0919 23:25:39.475405  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.475446  674837 retry.go:31] will retry after 347.691221ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.494589  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.494666  674837 retry.go:31] will retry after 347.211429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.497652  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:25:39.497699  674837 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:25:39.522536  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:25:39.522584  674837 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:25:39.546586  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:25:39.546617  674837 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0919 23:25:39.548384  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.548412  674837 retry.go:31] will retry after 294.337604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.566012  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:25:39.566030  674837 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:25:39.584679  674837 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:39.584705  674837 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:25:39.603131  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:39.659084  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.659126  674837 retry.go:31] will retry after 170.3526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.824257  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:39.829607  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:25:39.842950  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:39.842945  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:39.894925  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.894980  674837 retry.go:31] will retry after 225.409439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.897913  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.897964  674837 retry.go:31] will retry after 505.694132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.913611  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.913645  674837 retry.go:31] will retry after 376.475703ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:39.913676  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:39.913710  674837 retry.go:31] will retry after 258.235731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
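	(Editor's note: every "apply failed, will retry" pair above follows the same pattern: the kubectl apply fails because the API server on port 8444 is not yet serving, so the step sleeps a short randomized interval and tries again. A stripped-down sketch of that retry loop is below; applyWithRetry is a hypothetical helper, and the jitter range and attempt cap are assumptions rather than minikube's exact retry policy.)

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry reruns the kubectl apply until it succeeds or attempts are
// exhausted, sleeping a randomized backoff between tries, similar to the
// retry.go "will retry after ..." lines above.
func applyWithRetry(kubectl, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.34.0/kubectl",
		"/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```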
	W0919 23:25:35.721926  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:37.722686  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:40.222519  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:37.578970  673615 addons.go:514] duration metric: took 3.159245865s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0919 23:25:38.060882  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:38.066247  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.066275  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
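	(Editor's note: the 500 responses above show the apiserver reporting itself not yet healthy: every check passes except the apiservice-discovery-controller post-start hook, so /healthz as a whole fails and the test keeps polling. A small Go sketch of such a poll against the same endpoint is below; waitForHealthz is a hypothetical helper, and the insecure TLS config is only for probing a local test cluster whose serving cert is not in the host trust store.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes,
// printing the body of non-200 responses much like api_server.go does above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip certificate verification for this local health probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```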
	I0919 23:25:38.560739  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:38.565272  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.565303  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:39.060625  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:39.065273  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:39.065302  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:39.560637  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:39.564973  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:39.565015  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:40.060260  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:40.064604  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:40.064631  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:40.560227  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:40.564386  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:40.564408  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:38.695125  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:41.195288  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:40.120637  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:40.172638  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:40.175640  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.175673  674837 retry.go:31] will retry after 463.562458ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:40.233280  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.233322  674837 retry.go:31] will retry after 709.868249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.290287  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:40.346042  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.346070  674837 retry.go:31] will retry after 596.08637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.404268  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:40.464443  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.464480  674837 retry.go:31] will retry after 520.136858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.639715  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:40.695405  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.695451  674837 retry.go:31] will retry after 445.187627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:40.942744  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:25:40.943313  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:25:40.985184  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:41.014753  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.014794  674837 retry.go:31] will retry after 940.601778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.014794  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.014824  674837 retry.go:31] will retry after 1.053794311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.056387  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.056431  674837 retry.go:31] will retry after 475.710606ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.141589  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:41.204433  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.204468  674837 retry.go:31] will retry after 1.290660505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:41.451870  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:41.533007  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:41.591782  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.591824  674837 retry.go:31] will retry after 1.793211427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:41.955544  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:42.015312  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.015347  674837 retry.go:31] will retry after 1.274198901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.069468  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:42.130804  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.130839  674837 retry.go:31] will retry after 762.247396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.495886  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:42.569630  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.569666  674837 retry.go:31] will retry after 1.252116034s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
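Every apply above fails for the same reason: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, and nothing is accepting connections on localhost:8444 yet, so each kubectl apply exits with status 1 (the message itself suggests --validate=false as the escape hatch). minikube's addons code then simply re-runs the command after an increasing, jittered delay, which is what the "will retry after …" entries record. The Go sketch below illustrates that retry-with-backoff pattern under those assumptions; applyWithRetry and its delay constants are illustrative, not minikube's actual retry.go.

    // retry_apply_sketch.go: re-run "kubectl apply" with a growing, jittered delay
    // while the apiserver may still be refusing connections. Illustrative only.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	delay := 500 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
    		out, err := cmd.CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		fmt.Printf("apply failed (attempt %d): %v\n%s", i+1, err, out)
    		// add jitter so parallel addon retries do not all fire at once
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay *= 2
    	}
    	return fmt.Errorf("giving up on %s after %d attempts", manifest, attempts)
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
    		fmt.Println(err)
    	}
    }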
	I0919 23:25:42.893718  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:42.955347  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:42.955386  674837 retry.go:31] will retry after 2.462259291s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.290702  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:43.348777  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.348830  674837 retry.go:31] will retry after 1.606378233s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.385981  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:43.443332  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.443368  674837 retry.go:31] will retry after 2.094940082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:43.452768  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:43.822145  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:43.880051  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:43.880086  674837 retry.go:31] will retry after 1.63512815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:42.722025  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:44.722773  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	I0919 23:25:41.060866  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:41.065935  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:41.065967  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:41.560293  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:41.564900  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:25:41.564935  673615 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:25:42.060580  673615 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:25:42.065948  673615 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0919 23:25:42.066916  673615 api_server.go:141] control plane version: v1.34.0
	I0919 23:25:42.066941  673615 api_server.go:131] duration metric: took 4.506796265s to wait for apiserver health ...
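The 500 responses above are the apiserver's /healthz output, which lists every poststarthook check; only apiservice-discovery-controller is still failing, and the poll at 23:25:42 finally gets a plain 200 "ok". A minimal sketch of such a health poll follows, assuming a plain HTTPS GET loop; the insecure TLS config is a simplification for illustration only, and waitForHealthz is not minikube's api_server.go.

    // healthz_poll_sketch.go: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }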
	I0919 23:25:42.066949  673615 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:25:42.070613  673615 system_pods.go:59] 8 kube-system pods found
	I0919 23:25:42.070647  673615 system_pods.go:61] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:42.070658  673615 system_pods.go:61] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:25:42.070696  673615 system_pods.go:61] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:25:42.070705  673615 system_pods.go:61] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:25:42.070713  673615 system_pods.go:61] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:25:42.070721  673615 system_pods.go:61] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:25:42.070737  673615 system_pods.go:61] "metrics-server-746fcd58dc-sptn4" [4decf9fa-5593-4e44-9579-ba7f183d4fed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:25:42.070742  673615 system_pods.go:61] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Running
	I0919 23:25:42.070751  673615 system_pods.go:74] duration metric: took 3.795459ms to wait for pod list to return data ...
	I0919 23:25:42.070760  673615 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:25:42.073120  673615 default_sa.go:45] found service account: "default"
	I0919 23:25:42.073139  673615 default_sa.go:55] duration metric: took 2.372853ms for default service account to be created ...
	I0919 23:25:42.073147  673615 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:25:42.075734  673615 system_pods.go:86] 8 kube-system pods found
	I0919 23:25:42.075759  673615 system_pods.go:89] "coredns-66bc5c9577-4tv82" [e5a76766-119a-4cd1-af31-c849ceca9213] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:42.075766  673615 system_pods.go:89] "etcd-embed-certs-253767" [ba55bd10-b589-43d9-adf4-55878f32c04e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:25:42.075774  673615 system_pods.go:89] "kube-apiserver-embed-certs-253767" [32b772fc-d09a-44a9-9997-70c58ee0403c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:25:42.075782  673615 system_pods.go:89] "kube-controller-manager-embed-certs-253767" [eb963db4-1fed-4ff1-9aca-584c6c9847e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:25:42.075789  673615 system_pods.go:89] "kube-proxy-j4ch4" [3e3fd9d8-5020-4eb0-9cf7-7595838a6ae0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:25:42.075794  673615 system_pods.go:89] "kube-scheduler-embed-certs-253767" [9cea9d81-809e-480b-9d68-b8ae3786cd5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:25:42.075802  673615 system_pods.go:89] "metrics-server-746fcd58dc-sptn4" [4decf9fa-5593-4e44-9579-ba7f183d4fed] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:25:42.075805  673615 system_pods.go:89] "storage-provisioner" [43f91030-3f3a-48e2-9f19-566f9d421975] Running
	I0919 23:25:42.075817  673615 system_pods.go:126] duration metric: took 2.659095ms to wait for k8s-apps to be running ...
	I0919 23:25:42.075826  673615 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:25:42.075863  673615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:25:42.088470  673615 system_svc.go:56] duration metric: took 12.622937ms WaitForService to wait for kubelet
	I0919 23:25:42.088493  673615 kubeadm.go:578] duration metric: took 7.668820061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:25:42.088563  673615 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:25:42.091827  673615 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:25:42.091853  673615 node_conditions.go:123] node cpu capacity is 8
	I0919 23:25:42.091866  673615 node_conditions.go:105] duration metric: took 3.29892ms to run NodePressure ...
	I0919 23:25:42.091878  673615 start.go:241] waiting for startup goroutines ...
	I0919 23:25:42.091884  673615 start.go:246] waiting for cluster config update ...
	I0919 23:25:42.091900  673615 start.go:255] writing updated cluster config ...
	I0919 23:25:42.092207  673615 ssh_runner.go:195] Run: rm -f paused
	I0919 23:25:42.095921  673615 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:42.099879  673615 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tv82" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:25:44.104844  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
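The pod_ready.go lines are repeatedly asking whether the coredns pod has reached the Ready condition. A hedged client-go sketch of that per-pod check follows; podIsReady is an illustrative helper (not minikube's implementation), and the kubeconfig path and pod name are copied from the log purely as example inputs.

    // pod_ready_sketch.go: ask the apiserver whether a single pod is Ready.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	ready, err := podIsReady(ctx, cs, "kube-system", "coredns-66bc5c9577-4tv82")
    	fmt.Println(ready, err)
    }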
	W0919 23:25:43.693879  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:45.694423  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:47.695639  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:44.955840  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:45.012801  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.012838  674837 retry.go:31] will retry after 3.847878931s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.418368  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:45.482236  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.482273  674837 retry.go:31] will retry after 1.591517849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.515943  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:25:45.538727  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:45.577679  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.577718  674837 retry.go:31] will retry after 4.874202788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:45.601250  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:45.601285  674837 retry.go:31] will retry after 3.880703529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:45.951829  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:47.074084  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:47.138292  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:47.138325  674837 retry.go:31] will retry after 3.906263754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:47.952474  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:48.861736  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:48.930715  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:48.930754  674837 retry.go:31] will retry after 5.934858241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:49.482298  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:49.551667  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:49.551713  674837 retry.go:31] will retry after 4.622892988s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:47.221542  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:49.221765  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:46.105987  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:48.605715  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:50.606208  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:50.194445  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:52.194756  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	W0919 23:25:49.952741  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:50.452954  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:50.519567  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:50.519612  674837 retry.go:31] will retry after 5.244482678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:51.045317  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:51.105296  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:51.105335  674837 retry.go:31] will retry after 8.297441162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:51.952829  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:54.175174  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:25:54.233382  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:54.233429  674837 retry.go:31] will retry after 6.050312194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:54.451892  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:54.866423  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:25:54.922478  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:54.922540  674837 retry.go:31] will retry after 9.107847114s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:51.222298  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:53.721797  666828 pod_ready.go:104] pod "coredns-66bc5c9577-z2rcs" is not "Ready", error: <nil>
	W0919 23:25:53.104877  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:55.104931  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:54.196279  660928 pod_ready.go:104] pod "coredns-5dd5756b68-q75nl" is not "Ready", error: <nil>
	I0919 23:25:54.693762  660928 pod_ready.go:94] pod "coredns-5dd5756b68-q75nl" is "Ready"
	I0919 23:25:54.693791  660928 pod_ready.go:86] duration metric: took 59.504989655s for pod "coredns-5dd5756b68-q75nl" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.696671  660928 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.700420  660928 pod_ready.go:94] pod "etcd-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.700446  660928 pod_ready.go:86] duration metric: took 3.753342ms for pod "etcd-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.702876  660928 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.706864  660928 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.706882  660928 pod_ready.go:86] duration metric: took 3.98047ms for pod "kube-apiserver-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.709313  660928 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:54.893212  660928 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-359569" is "Ready"
	I0919 23:25:54.893246  660928 pod_ready.go:86] duration metric: took 183.913473ms for pod "kube-controller-manager-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.093814  660928 pod_ready.go:83] waiting for pod "kube-proxy-hvp2z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.492837  660928 pod_ready.go:94] pod "kube-proxy-hvp2z" is "Ready"
	I0919 23:25:55.492867  660928 pod_ready.go:86] duration metric: took 399.028031ms for pod "kube-proxy-hvp2z" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:55.693615  660928 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.092983  660928 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-359569" is "Ready"
	I0919 23:25:56.093016  660928 pod_ready.go:86] duration metric: took 399.373804ms for pod "kube-scheduler-old-k8s-version-359569" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.093032  660928 pod_ready.go:40] duration metric: took 1m0.909722957s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:56.139044  660928 start.go:617] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I0919 23:25:56.140388  660928 out.go:203] 
	W0919 23:25:56.141437  660928 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I0919 23:25:56.142544  660928 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0919 23:25:56.143770  660928 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-359569" cluster and "default" namespace by default
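The "minor skew: 6" warning just above is an integer comparison of the kubectl client's minor version against the cluster's, with a warning once the difference exceeds one minor release. A hedged sketch of that arithmetic is below; the parsing helper is illustrative, not minikube's start.go.

// Illustrative sketch of computing the client/cluster minor-version skew
// reported above; not minikube's actual version-check code.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.34.1", "1.28.0"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", client, cluster)
	}
}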
	I0919 23:25:56.222688  666828 pod_ready.go:94] pod "coredns-66bc5c9577-z2rcs" is "Ready"
	I0919 23:25:56.222713  666828 pod_ready.go:86] duration metric: took 37.006414087s for pod "coredns-66bc5c9577-z2rcs" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.225199  666828 pod_ready.go:83] waiting for pod "etcd-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.229161  666828 pod_ready.go:94] pod "etcd-no-preload-834234" is "Ready"
	I0919 23:25:56.229189  666828 pod_ready.go:86] duration metric: took 3.965294ms for pod "etcd-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.231384  666828 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.235209  666828 pod_ready.go:94] pod "kube-apiserver-no-preload-834234" is "Ready"
	I0919 23:25:56.235228  666828 pod_ready.go:86] duration metric: took 3.823926ms for pod "kube-apiserver-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.236996  666828 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.420952  666828 pod_ready.go:94] pod "kube-controller-manager-no-preload-834234" is "Ready"
	I0919 23:25:56.420979  666828 pod_ready.go:86] duration metric: took 183.963069ms for pod "kube-controller-manager-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:56.620959  666828 pod_ready.go:83] waiting for pod "kube-proxy-ljrsp" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.020879  666828 pod_ready.go:94] pod "kube-proxy-ljrsp" is "Ready"
	I0919 23:25:57.020909  666828 pod_ready.go:86] duration metric: took 399.925626ms for pod "kube-proxy-ljrsp" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.221140  666828 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.620967  666828 pod_ready.go:94] pod "kube-scheduler-no-preload-834234" is "Ready"
	I0919 23:25:57.620997  666828 pod_ready.go:86] duration metric: took 399.824833ms for pod "kube-scheduler-no-preload-834234" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:57.621011  666828 pod_ready.go:40] duration metric: took 38.416192153s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:57.669247  666828 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:25:57.671277  666828 out.go:179] * Done! kubectl is now configured to use "no-preload-834234" cluster and "default" namespace by default
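Both clusters above spend most of their startup inside pod_ready.go, polling each kube-system pod until its Ready condition reports True. A minimal client-go sketch of that kind of poll follows; the kubeconfig path, namespace, pod name, and poll interval are assumptions for illustration, not minikube's implementation.

// Hedged sketch of waiting for a pod's Ready condition with client-go,
// mirroring what the pod_ready.go log lines describe.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := podReady(cs, "kube-system", "coredns-66bc5c9577-z2rcs")
		if ok {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet, err:", err)
		time.Sleep(2 * time.Second)
	}
}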
	I0919 23:25:55.764272  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:25:55.827173  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:55.827221  674837 retry.go:31] will retry after 6.475736064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:56.452694  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	W0919 23:25:58.951882  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:25:59.403573  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:25:59.462484  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:25:59.462537  674837 retry.go:31] will retry after 6.573954523s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:25:57.105049  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:25:59.105629  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	I0919 23:26:00.284343  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:26:00.342903  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:00.342936  674837 retry.go:31] will retry after 9.28995248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:00.951959  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:02.303221  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0919 23:26:02.361047  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:02.361086  674837 retry.go:31] will retry after 19.573085188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:02.952610  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:04.031187  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0919 23:26:04.103367  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:04.103405  674837 retry.go:31] will retry after 12.611866796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:01.105835  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:03.605633  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:05.451952  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:06.037622  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:26:06.094888  674837 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:26:06.094920  674837 retry.go:31] will retry after 15.716692606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:26:07.452100  674837 node_ready.go:55] error getting node "default-k8s-diff-port-485703" condition "Ready" status (will retry): Get "https://192.168.85.2:8444/api/v1/nodes/default-k8s-diff-port-485703": dial tcp 192.168.85.2:8444: connect: connection refused
	I0919 23:26:09.633151  674837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0919 23:26:06.105406  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:08.105556  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	W0919 23:26:10.105867  673615 pod_ready.go:104] pod "coredns-66bc5c9577-4tv82" is not "Ready", error: <nil>
	I0919 23:26:11.578927  674837 node_ready.go:49] node "default-k8s-diff-port-485703" is "Ready"
	I0919 23:26:11.578982  674837 node_ready.go:38] duration metric: took 32.127722346s for node "default-k8s-diff-port-485703" to be "Ready" ...
	I0919 23:26:11.579007  674837 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:26:11.579150  674837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:26:12.280976  674837 api_server.go:72] duration metric: took 33.060026055s to wait for apiserver process to appear ...
	I0919 23:26:12.281011  674837 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:26:12.281041  674837 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0919 23:26:12.281244  674837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.648035251s)
	I0919 23:26:12.282572  674837 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-485703 addons enable metrics-server
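Earlier in this stream the start routine waits on the apiserver's /healthz endpoint at https://192.168.85.2:8444 before declaring the node usable. A rough probe of that endpoint is sketched below; it skips TLS verification purely to keep the example short, whereas minikube's real check authenticates against the cluster CA.

// Simplified healthz probe against a local apiserver; TLS verification is
// skipped only because this is an illustrative sketch, not production code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 "ok"
}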
	
	
	
	==> Docker <==
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.947793574Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 19 23:25:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:11.947824632Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.869034105Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.924145926Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:20 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:20.924268263Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:25:20 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:20Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:25:23 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:23.470816236Z" level=info msg="ignoring event" container=63c04f0517916ae38fdb13b4b0b8ca78204065a9545643175634b090d4b1324c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.879420971Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.879465215Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.881358553Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 19 23:25:40 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:40.881397959Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:25:41 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:41.952250813Z" level=info msg="ignoring event" container=12c2167f10bd3034771c5245f24d50d5de4222c935494183ccefd849b58749a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 23:25:48 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:48.862327339Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:49 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:49.152868193Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:25:49 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:25:49.153042723Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:25:49 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:25:49Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:26:10 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:26:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 19 23:26:11 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:11.987717231Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.039980937Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.040100509Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 19 23:26:12 old-k8s-version-359569 cri-dockerd[1119]: time="2025-09-19T23:26:12Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.115223631Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.115452385Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.117920227Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 19 23:26:12 old-k8s-version-359569 dockerd[812]: time="2025-09-19T23:26:12.118424107Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a2d9e9d5fed8c       07655ddf2eebe                                                                                         32 seconds ago       Running             kubernetes-dashboard      1                   c7548b753bce7       kubernetes-dashboard-8694d4445c-nlr4d
	9fa04b5a65fb2       6e38f40d628db                                                                                         36 seconds ago       Running             storage-provisioner       6                   bcb3f1d156708       storage-provisioner
	4726cd10ee5ef       ea1030da44aa1                                                                                         43 seconds ago       Running             kube-proxy                8                   6ae95751ec3e6       kube-proxy-hvp2z
	12c2167f10bd3       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        About a minute ago   Exited              kubernetes-dashboard      0                   c7548b753bce7       kubernetes-dashboard-8694d4445c-nlr4d
	6f8b342db16bd       ea1030da44aa1                                                                                         About a minute ago   Exited              kube-proxy                7                   6ae95751ec3e6       kube-proxy-hvp2z
	cf3b9fabeb74c       56cc512116c8f                                                                                         About a minute ago   Running             busybox                   1                   4cd1dfbfefb19       busybox
	b987ae99c1864       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   552a145ce96f5       coredns-5dd5756b68-q75nl
	63c04f0517916       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       5                   bcb3f1d156708       storage-provisioner
	037cf7c5d3be1       f6f496300a2ae                                                                                         About a minute ago   Running             kube-scheduler            1                   45ba06c4aa5bb       kube-scheduler-old-k8s-version-359569
	7f75bd54ab8d8       bb5e0dde9054c                                                                                         About a minute ago   Running             kube-apiserver            1                   6de24586c2ec0       kube-apiserver-old-k8s-version-359569
	92f5227dd2bce       4be79c38a4bab                                                                                         About a minute ago   Running             kube-controller-manager   1                   4b51b5626427a       kube-controller-manager-old-k8s-version-359569
	3c553bb4cb66a       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      1                   e8c784e603e43       etcd-old-k8s-version-359569
	24cba55f27ce7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago        Exited              busybox                   0                   93ec1d27ab925       busybox
	07d67388cf6cf       ead0a4a53df89                                                                                         6 minutes ago        Exited              coredns                   0                   161796527f3af       coredns-5dd5756b68-q75nl
	16a6dcf2464a7       bb5e0dde9054c                                                                                         6 minutes ago        Exited              kube-apiserver            0                   59520b69eca50       kube-apiserver-old-k8s-version-359569
	d2da53d03680f       4be79c38a4bab                                                                                         6 minutes ago        Exited              kube-controller-manager   0                   5bdd0c3014438       kube-controller-manager-old-k8s-version-359569
	a6ca7dd11600f       73deb9a3f7025                                                                                         6 minutes ago        Exited              etcd                      0                   0a1d0a4a5e8ac       etcd-old-k8s-version-359569
	dc91f93ea3d06       f6f496300a2ae                                                                                         6 minutes ago        Exited              kube-scheduler            0                   efda4b3258a50       kube-scheduler-old-k8s-version-359569
	
	
	==> coredns [07d67388cf6c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b987ae99c186] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33582 - 30568 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 6.001999861s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:40803->192.168.103.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:57889 - 5513 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 6.001761535s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:48812->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:43992 - 65473 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 4.001933173s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:55087->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:60983 - 51657 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.00102617s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:56781->192.168.103.1:53: i/o timeout
	[INFO] 127.0.0.1:40603 - 47036 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.00026162s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:53386->192.168.103.1:53: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:45749 - 41213 "HINFO IN 2237157274479912952.9118626531254449427. udp 57 false 512" - - 0 2.000554092s
	[ERROR] plugin/errors: 2 2237157274479912952.9118626531254449427. HINFO: read udp 10.244.0.6:38992->192.168.103.1:53: i/o timeout
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-359569
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-359569
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=old-k8s-version-359569
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_19_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:19:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-359569
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:26:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:19:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 19 Sep 2025 23:26:10 +0000   Fri, 19 Sep 2025 23:26:10 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-359569
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 18f48281b08d485ab8cfd87318391c82
	  System UUID:                5a3ce1f6-0d12-4d86-96a8-fc8a854ce373
	  Boot ID:                    f409d6b2-5b2d-482a-a418-1c1a417dfa0a
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 coredns-5dd5756b68-q75nl                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m7s
	  kube-system                 etcd-old-k8s-version-359569                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m20s
	  kube-system                 kube-apiserver-old-k8s-version-359569             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-old-k8s-version-359569    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-hvp2z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-scheduler-old-k8s-version-359569             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 metrics-server-57f55c9bc5-rrcl7                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         113s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-ddwj8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-nlr4d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 42s                    kube-proxy       
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s                  kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s                   node-controller  Node old-k8s-version-359569 event: Registered Node old-k8s-version-359569 in Controller
	  Normal  Starting                 85s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x9 over 85s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x7 over 85s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)      kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node old-k8s-version-359569 event: Registered Node old-k8s-version-359569 in Controller
	  Normal  Starting                 5s                     kubelet          Starting kubelet.
	  Normal  Starting                 4s                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                     kubelet          Node old-k8s-version-359569 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s                     kubelet          Node old-k8s-version-359569 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s                     kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.005224] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.995125] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.506127] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.500833] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.994986] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.505925] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501603] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.993779] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.507835] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501321] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.990961] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[Sep19 23:26] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501557] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.990813] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.510399] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.500969] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.989916] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.510723] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501805] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.987992] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.513010] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +1.501157] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	[  +0.902088] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 01 6e 96 e1 37 08 06
	[  +0.599571] IPv4: martian destination 127.0.0.11 from 10.244.0.6, dev bridge
	
	
	==> etcd [3c553bb4cb66] <==
	{"level":"info","ts":"2025-09-19T23:24:50.612642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-09-19T23:24:50.612677Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-09-19T23:24:50.612797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-09-19T23:24:50.612991Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:24:50.61358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:24:50.616455Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T23:24:50.616842Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-19T23:24:50.616921Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-19T23:24:50.617331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:50.617555Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:51.703252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.7033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.703363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:24:51.703391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.703448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-19T23:24:51.704569Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-359569 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:24:51.704616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:24:51.704694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:24:51.704762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:24:51.704811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:24:51.706973Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:24:51.707057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-19T23:25:49.35754Z","caller":"traceutil/trace.go:171","msg":"trace[286202149] transaction","detail":"{read_only:false; response_revision:874; number_of_response:1; }","duration":"109.642649ms","start":"2025-09-19T23:25:49.247843Z","end":"2025-09-19T23:25:49.357486Z","steps":["trace[286202149] 'process raft request'  (duration: 104.807918ms)"],"step_count":1}
	
	
	==> etcd [a6ca7dd11600] <==
	{"level":"info","ts":"2025-09-19T23:19:49.684921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.684943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.68496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-19T23:19:49.686009Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:19:49.686978Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:19:49.687064Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687283Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.687402Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:19:49.686679Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-359569 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:19:49.688392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:19:49.689083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-19T23:24:22.066418Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-19T23:24:22.066594Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-359569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2025-09-19T23:24:22.066743Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.066773Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.068006Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T23:24:22.068132Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-19T23:24:22.088104Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-19T23:24:22.09009Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:22.090234Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-19T23:24:22.09029Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-359569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> kernel <==
	 23:26:14 up  2:08,  0 users,  load average: 2.02, 2.35, 3.17
	Linux old-k8s-version-359569 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [16a6dcf2464a] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083175       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083191       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0919 23:24:23.083180       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7f75bd54ab8d] <==
	W0919 23:24:55.094601       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 23:24:57.740361       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 request timed out
	I0919 23:24:57.740389       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 23:25:02.733379       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0919 23:25:05.635085       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0919 23:25:05.733823       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:25:05.935777       1 controller.go:624] quota admission added evaluator for: endpoints
	E0919 23:25:12.733742       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:22.734657       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:32.735163       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:25:42.736241       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I0919 23:25:52.632549       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.217.254:443: connect: connection refused
	I0919 23:25:52.632573       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 23:25:52.737506       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	W0919 23:25:53.738180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 23:25:53.738222       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0919 23:25:53.738232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:25:53.739307       1 handler_proxy.go:93] no RequestInfo found in the context
	E0919 23:25:53.739375       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 23:25:53.739393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:26:02.738186       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0919 23:26:12.738484       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [92f5227dd2bc] <==
	I0919 23:25:12.150128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.987144ms"
	I0919 23:25:12.150224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.044µs"
	I0919 23:25:20.809561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="153.307µs"
	I0919 23:25:25.815134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="99.525µs"
	E0919 23:25:35.690476       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 23:25:36.112624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 23:25:36.815681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.298µs"
	I0919 23:25:40.810468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="91.021µs"
	I0919 23:25:42.464354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.766676ms"
	I0919 23:25:42.465273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="167.688µs"
	I0919 23:25:43.480607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.557651ms"
	I0919 23:25:43.480715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.716µs"
	I0919 23:25:48.811342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.998µs"
	I0919 23:25:51.809249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="79.344µs"
	I0919 23:25:54.665373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.940381ms"
	I0919 23:25:54.665554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.949µs"
	I0919 23:26:03.809422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.613µs"
	I0919 23:26:03.818902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="93.248µs"
	E0919 23:26:05.432859       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.434071       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.435264       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:26:05.694895       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0919 23:26:06.119962       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 23:26:11.703673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.564µs"
	I0919 23:26:11.737577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.072µs"
	
	
	==> kube-controller-manager [d2da53d03680] <==
	E0919 23:20:56.673061       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.673794       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.675299       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:20:56.676048       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405139       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:50.405140       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.674370       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.675802       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:21:56.676323       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406111       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:50.406144       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.674482       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676638       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:22:56.676640       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407306       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:50.407318       1 dynamic_cafile_content.go:166] "Failed to watch CA file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.674821       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	E0919 23:23:56.676948       1 dynamic_serving_content.go:141] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	I0919 23:24:21.724575       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0919 23:24:21.732070       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-rrcl7"
	I0919 23:24:21.743884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="19.488983ms"
	I0919 23:24:21.756125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="12.181422ms"
	I0919 23:24:21.756238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="67.975µs"
	I0919 23:24:21.757612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="84.823µs"
	
	
	==> kube-proxy [4726cd10ee5e] <==
	I0919 23:25:31.955227       1 server_others.go:69] "Using iptables proxy"
	I0919 23:25:31.966149       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0919 23:25:31.988376       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:25:31.990994       1 server_others.go:152] "Using iptables Proxier"
	I0919 23:25:31.991032       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0919 23:25:31.991040       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0919 23:25:31.991068       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 23:25:31.991351       1 server.go:846] "Version info" version="v1.28.0"
	I0919 23:25:31.991365       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:25:31.992086       1 config.go:97] "Starting endpoint slice config controller"
	I0919 23:25:31.992092       1 config.go:188] "Starting service config controller"
	I0919 23:25:31.992129       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 23:25:31.992133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 23:25:31.992180       1 config.go:315] "Starting node config controller"
	I0919 23:25:31.992214       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 23:25:32.092941       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0919 23:25:32.092957       1 shared_informer.go:318] Caches are synced for service config
	I0919 23:25:32.092974       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [6f8b342db16b] <==
	E0919 23:25:06.955956       1 run.go:74] "command failed" err="failed complete: too many open files"
	
	
	==> kube-scheduler [037cf7c5d3be] <==
	I0919 23:24:51.417512       1 serving.go:348] Generated self-signed cert in-memory
	W0919 23:24:52.662834       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:24:52.662869       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:24:52.662882       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:24:52.662893       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:24:52.687375       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0919 23:24:52.687404       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:52.689700       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:52.689752       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 23:24:52.691408       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0919 23:24:52.691594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0919 23:24:52.790100       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dc91f93ea3d0] <==
	W0919 23:19:51.050936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.050958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.050961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.051051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.876106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 23:19:51.876148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0919 23:19:51.951405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 23:19:51.951449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 23:19:51.979303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:51.979350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 23:19:51.979365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.979382       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 23:19:51.990556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:51.990608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.017451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 23:19:52.017493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 23:19:52.032120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 23:19:52.032165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 23:19:52.238694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 23:19:52.238739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 23:19:52.240416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 23:19:52.240457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0919 23:19:52.544907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 23:24:22.089209       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: I0919 23:26:10.841447    4622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161796527f3afd533a2b4bc534a0cd043e92f1b8775e7f508dc61fe7b5ceed38"
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: I0919 23:26:10.863924    4622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bdd0c30144387a690b6fe45b9436e6731783b85938e42fc56e12075f27c5266"
	Sep 19 23:26:10 old-k8s-version-359569 kubelet[4622]: E0919 23:26:10.871993    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-359569\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-359569"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.553955    4622 apiserver.go:52] "Watching apiserver"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.571654    4622 topology_manager.go:215] "Topology Admit Handler" podUID="0fafe72c-6f1b-4001-971f-54b044acb1cd" podNamespace="kube-system" podName="coredns-5dd5756b68-q75nl"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.571894    4622 topology_manager.go:215] "Topology Admit Handler" podUID="8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be" podNamespace="kube-system" podName="kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.572702    4622 topology_manager.go:215] "Topology Admit Handler" podUID="ef0a9cd7-6497-4877-8fc6-286067f0db01" podNamespace="kube-system" podName="storage-provisioner"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.576182    4622 topology_manager.go:215] "Topology Admit Handler" podUID="5b59928a-3af7-4037-882a-de2e0f43bd9c" podNamespace="default" podName="busybox"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.576315    4622 topology_manager.go:215] "Topology Admit Handler" podUID="d09f48f6-888a-467e-b82b-d4847477a8ac" podNamespace="kube-system" podName="metrics-server-57f55c9bc5-rrcl7"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.578673    4622 topology_manager.go:215] "Topology Admit Handler" podUID="79f7fb7f-084e-49c8-89ec-4c532a4ccf19" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-nlr4d"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.578904    4622 topology_manager.go:215] "Topology Admit Handler" podUID="22bbd461-d78b-4aa2-8860-9b8628063030" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-ddwj8"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600690    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be-lib-modules\") pod \"kube-proxy-hvp2z\" (UID: \"8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be\") " pod="kube-system/kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600775    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef0a9cd7-6497-4877-8fc6-286067f0db01-tmp\") pod \"storage-provisioner\" (UID: \"ef0a9cd7-6497-4877-8fc6-286067f0db01\") " pod="kube-system/storage-provisioner"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.600816    4622 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be-xtables-lock\") pod \"kube-proxy-hvp2z\" (UID: \"8c7d7ea5-01cf-4f8c-bf01-51f9ad2711be\") " pod="kube-system/kube-proxy-hvp2z"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: I0919 23:26:11.676889    4622 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: E0919 23:26:11.892529    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-old-k8s-version-359569\" already exists" pod="kube-system/kube-controller-manager-old-k8s-version-359569"
	Sep 19 23:26:11 old-k8s-version-359569 kubelet[4622]: E0919 23:26:11.897134    4622 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-old-k8s-version-359569\" already exists" pod="kube-system/kube-scheduler-old-k8s-version-359569"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043095    4622 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043220    4622 kuberuntime_image.go:53] "Failed to pull image" err="Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" image="registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.043890    4622 kuberuntime_manager.go:1209] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ng9kz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,Termination
GracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dashboard-metrics-scraper-5f989dc9cf-ddwj8_kubernetes-dashboard(22bbd461-d78b-4aa2-8860-9b8628063030): ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.044539    4622 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-ddwj8" podUID="22bbd461-d78b-4aa2-8860-9b8628063030"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.119744    4622 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.119831    4622 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.120182    4622 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lxnqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Prob
e{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rrcl7_kube-system(d09f48f6-888a-467e-b82b-d4847477a8ac): ErrImagePull: Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 19 23:26:12 old-k8s-version-359569 kubelet[4622]: E0919 23:26:12.120280    4622 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rrcl7" podUID="d09f48f6-888a-467e-b82b-d4847477a8ac"
	
	
	==> kubernetes-dashboard [12c2167f10bd] <==
	2025/09/19 23:25:11 Using namespace: kubernetes-dashboard
	2025/09/19 23:25:11 Using in-cluster config to connect to apiserver
	2025/09/19 23:25:11 Using secret token for csrf signing
	2025/09/19 23:25:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:25:11 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00059fae8)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00043c100)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19aba3a?)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:96 +0x1cf
	
	
	==> kubernetes-dashboard [a2d9e9d5fed8] <==
	2025/09/19 23:25:42 Using namespace: kubernetes-dashboard
	2025/09/19 23:25:42 Using in-cluster config to connect to apiserver
	2025/09/19 23:25:42 Using secret token for csrf signing
	2025/09/19 23:25:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:25:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:25:42 Successful initial request to the apiserver, version: v1.28.0
	2025/09/19 23:25:42 Generating JWE encryption key
	2025/09/19 23:25:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:25:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:25:42 Initializing JWE encryption key from synchronized object
	2025/09/19 23:25:42 Creating in-cluster Sidecar client
	2025/09/19 23:25:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:25:42 Serving insecurely on HTTP port: 9090
	2025/09/19 23:26:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:25:42 Starting overwatch
	
	
	==> storage-provisioner [63c04f051791] <==
	I0919 23:24:53.449631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:25:23.454057       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9fa04b5a65fb] <==
	I0919 23:25:38.906056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:25:38.916294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:25:38.916357       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 23:25:38.928041       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:25:38.928241       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ee19f75f-e3bd-4f8c-a05f-4be3ebc50a28", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8 became leader
	I0919 23:25:38.928308       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8!
	I0919 23:25:39.028710       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-359569_08c6bfbb-08f9-4132-a1a5-178eeec673f8!
	

                                                
                                                
-- /stdout --
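The kube-scheduler "forbidden" list/watch failures captured above are usually a startup race: the scheduler comes up before the apiserver has finished reconciling the bootstrap RBAC roles, and the reflectors recover once it has (the cache-sync message at 23:19:52 is consistent with that). The later "finished without leader elect" error appears to record only that the scheduler process exited before leader election completed. A hedged manual check of the scheduler's RBAC against this profile (standard kubectl; the context name is taken from the post-mortem commands below):

    kubectl --context old-k8s-version-359569 auth can-i list services --as=system:kube-scheduler
    kubectl --context old-k8s-version-359569 auth can-i list nodes --as=system:kube-scheduler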
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-359569 -n old-k8s-version-359569
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-359569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8: exit status 1 (67.371846ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rrcl7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-ddwj8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-359569 describe pod metrics-server-57f55c9bc5-rrcl7 dashboard-metrics-scraper-5f989dc9cf-ddwj8: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.01s)
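The NotFound result above is most likely a namespacing issue rather than the pods disappearing: the describe in helpers_test.go:285 is run without -n, so kubectl looks in the default namespace, while the kubelet log shows the two pods living in kube-system and kubernetes-dashboard. A minimal sketch of the same post-mortem with explicit namespaces (names copied from the log; assumes the profile still exists):

    kubectl --context old-k8s-version-359569 get pods -A --field-selector=status.phase!=Running
    kubectl --context old-k8s-version-359569 -n kube-system describe pod metrics-server-57f55c9bc5-rrcl7
    kubectl --context old-k8s-version-359569 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-ddwj8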

                                                
                                    

Test pass (294/334)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 13.92
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 11.84
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.2
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.06
21 TestBinaryMirror 0.79
22 TestOffline 73.24
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 142.65
29 TestAddons/serial/Volcano 40.66
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.48
35 TestAddons/parallel/Registry 15.86
36 TestAddons/parallel/RegistryCreds 0.57
37 TestAddons/parallel/Ingress 21.34
38 TestAddons/parallel/InspektorGadget 5.21
39 TestAddons/parallel/MetricsServer 5.56
41 TestAddons/parallel/CSI 34.56
42 TestAddons/parallel/Headlamp 17.53
43 TestAddons/parallel/CloudSpanner 6.51
44 TestAddons/parallel/LocalPath 55.52
45 TestAddons/parallel/NvidiaDevicePlugin 6.42
46 TestAddons/parallel/Yakd 10.62
47 TestAddons/parallel/AmdGpuDevicePlugin 5.52
48 TestAddons/StoppedEnableDisable 11.15
49 TestCertOptions 30.21
50 TestCertExpiration 253.37
51 TestDockerFlags 26.48
52 TestForceSystemdFlag 37.85
53 TestForceSystemdEnv 27.59
55 TestKVMDriverInstallOrUpdate 1.4
59 TestErrorSpam/setup 22.03
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.93
62 TestErrorSpam/pause 1.2
63 TestErrorSpam/unpause 1.27
64 TestErrorSpam/stop 10.89
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 68.18
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 48.03
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.25
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 52.92
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 0.97
87 TestFunctional/serial/LogsFileCmd 0.97
88 TestFunctional/serial/InvalidService 3.99
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 10.86
92 TestFunctional/parallel/DryRun 0.37
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.93
98 TestFunctional/parallel/ServiceCmdConnect 11.7
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 44.33
102 TestFunctional/parallel/SSHCmd 0.56
103 TestFunctional/parallel/CpCmd 1.72
104 TestFunctional/parallel/MySQL 21.75
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.61
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
114 TestFunctional/parallel/License 0.49
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.39
120 TestFunctional/parallel/ImageCommands/Setup 2.02
121 TestFunctional/parallel/DockerEnv/bash 1.04
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.21
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 18.17
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
145 TestFunctional/parallel/ProfileCmd/profile_list 0.42
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
147 TestFunctional/parallel/MountCmd/any-port 13.69
148 TestFunctional/parallel/MountCmd/specific-port 1.87
149 TestFunctional/parallel/ServiceCmd/List 0.94
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.74
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
153 TestFunctional/parallel/Version/short 0.06
154 TestFunctional/parallel/Version/components 0.56
155 TestFunctional/parallel/ServiceCmd/Format 0.55
156 TestFunctional/parallel/ServiceCmd/URL 0.52
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/NodeLabels 0.07
178 TestMultiControlPlane/serial/StopCluster 21.73
182 TestImageBuild/serial/Setup 20.8
183 TestImageBuild/serial/NormalBuild 1.1
184 TestImageBuild/serial/BuildWithBuildArg 0.63
185 TestImageBuild/serial/BuildWithDockerIgnore 0.46
186 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.45
190 TestJSONOutput/start/Command 66.79
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.49
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.44
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 10.72
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.19
215 TestKicCustomNetwork/create_custom_network 24.31
216 TestKicCustomNetwork/use_default_bridge_network 21.95
217 TestKicExistingNetwork 25.45
218 TestKicCustomSubnet 23.97
219 TestKicStaticIP 24.04
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 51.59
224 TestMountStart/serial/StartWithMountFirst 8.1
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 8.32
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.5
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.19
231 TestMountStart/serial/RestartStopped 9.15
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 57.54
236 TestMultiNode/serial/DeployApp2Nodes 54.26
237 TestMultiNode/serial/PingHostFrom2Pods 0.82
238 TestMultiNode/serial/AddNode 13.44
239 TestMultiNode/serial/MultiNodeLabels 0.07
240 TestMultiNode/serial/ProfileList 0.66
241 TestMultiNode/serial/CopyFile 9.59
242 TestMultiNode/serial/StopNode 2.16
243 TestMultiNode/serial/StartAfterStop 9.1
244 TestMultiNode/serial/RestartKeepsNodes 73.26
245 TestMultiNode/serial/DeleteNode 5.3
246 TestMultiNode/serial/StopMultiNode 21.74
247 TestMultiNode/serial/RestartMultiNode 51.37
248 TestMultiNode/serial/ValidateNameConflict 26.08
253 TestPreload 157.02
255 TestScheduledStopUnix 95.73
256 TestSkaffold 81.06
258 TestInsufficientStorage 9.85
259 TestRunningBinaryUpgrade 53.8
261 TestKubernetesUpgrade 365.9
262 TestMissingContainerUpgrade 96.53
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
265 TestNoKubernetes/serial/StartWithK8s 50.29
266 TestNoKubernetes/serial/StartWithStopK8s 17.33
267 TestStoppedBinaryUpgrade/Setup 2.59
268 TestStoppedBinaryUpgrade/Upgrade 68.56
269 TestNoKubernetes/serial/Start 9.46
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 1.25
272 TestNoKubernetes/serial/Stop 4.56
273 TestNoKubernetes/serial/StartNoArgs 11.28
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
295 TestPause/serial/Start 65.13
296 TestNetworkPlugins/group/auto/Start 68.42
297 TestNetworkPlugins/group/kindnet/Start 58.76
298 TestPause/serial/SecondStartNoReconfiguration 87.67
299 TestNetworkPlugins/group/auto/KubeletFlags 0.28
300 TestNetworkPlugins/group/auto/NetCatPod 9.23
301 TestNetworkPlugins/group/auto/DNS 0.15
302 TestNetworkPlugins/group/auto/Localhost 0.13
303 TestNetworkPlugins/group/auto/HairPin 0.14
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
306 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
307 TestNetworkPlugins/group/calico/Start 49.71
308 TestNetworkPlugins/group/kindnet/DNS 0.16
309 TestNetworkPlugins/group/kindnet/Localhost 0.14
310 TestNetworkPlugins/group/kindnet/HairPin 0.13
311 TestNetworkPlugins/group/custom-flannel/Start 82.32
312 TestPause/serial/Pause 0.56
313 TestPause/serial/VerifyStatus 0.35
314 TestPause/serial/Unpause 0.59
315 TestPause/serial/PauseAgain 0.88
316 TestPause/serial/DeletePaused 3.13
317 TestPause/serial/VerifyDeletedResources 0.76
318 TestNetworkPlugins/group/false/Start 78.52
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.29
321 TestNetworkPlugins/group/calico/NetCatPod 9.19
322 TestNetworkPlugins/group/calico/DNS 0.14
323 TestNetworkPlugins/group/calico/Localhost 0.13
324 TestNetworkPlugins/group/calico/HairPin 0.12
325 TestNetworkPlugins/group/enable-default-cni/Start 78.24
326 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
327 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
328 TestNetworkPlugins/group/false/KubeletFlags 0.29
329 TestNetworkPlugins/group/false/NetCatPod 9.21
330 TestNetworkPlugins/group/custom-flannel/DNS 0.15
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
333 TestNetworkPlugins/group/false/DNS 0.15
334 TestNetworkPlugins/group/false/Localhost 0.12
335 TestNetworkPlugins/group/false/HairPin 0.12
336 TestNetworkPlugins/group/flannel/Start 115.05
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.33
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
346 TestNetworkPlugins/group/flannel/ControllerPod 6.01
347 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
348 TestNetworkPlugins/group/flannel/NetCatPod 8.19
349 TestNetworkPlugins/group/flannel/DNS 0.15
350 TestNetworkPlugins/group/flannel/Localhost 0.12
351 TestNetworkPlugins/group/flannel/HairPin 0.14
353 TestStartStop/group/no-preload/serial/FirstStart 214.07
355 TestStartStop/group/embed-certs/serial/FirstStart 99.17
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.11
358 TestStartStop/group/old-k8s-version/serial/DeployApp 10.26
359 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.89
360 TestStartStop/group/old-k8s-version/serial/Stop 10.72
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
362 TestStartStop/group/old-k8s-version/serial/SecondStart 83.69
363 TestStartStop/group/no-preload/serial/DeployApp 10.26
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
365 TestStartStop/group/no-preload/serial/Stop 10.72
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
367 TestStartStop/group/no-preload/serial/SecondStart 57.83
368 TestStartStop/group/embed-certs/serial/DeployApp 10.25
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
371 TestStartStop/group/embed-certs/serial/Stop 10.96
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
375 TestStartStop/group/embed-certs/serial/SecondStart 81.58
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
377 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 79.07
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
382 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
385 TestStartStop/group/no-preload/serial/Pause 3.15
387 TestStartStop/group/newest-cni/serial/FirstStart 31.15
388 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
390 TestStartStop/group/newest-cni/serial/Stop 10.76
391 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
396 TestStartStop/group/newest-cni/serial/SecondStart 13.05
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
398 TestStartStop/group/embed-certs/serial/Pause 2.33
399 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
400 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.48
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
404 TestStartStop/group/newest-cni/serial/Pause 2.23
x
+
TestDownloadOnly/v1.28.0/json-events (13.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-873856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-873856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.9158473s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.92s)
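With -o=json the start command should emit one CloudEvent-style JSON object per line on stdout, which is what the json-events subtests consume. A hedged sketch for eyeballing the same stream by hand (assumes jq is installed; the flags are copied from the test invocation above):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-873856 --force \
      --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker | jq -r '.type'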

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0919 22:14:47.654364  146335 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0919 22:14:47.654479  146335 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
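preload-exists passes once the tarball downloaded by the previous subtest is on disk; a quick manual equivalent of the same check (path copied verbatim from the log above) would be:

    ls -lh /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4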

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-873856
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-873856: exit status 85 (59.131469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-873856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-873856 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:33.778257  146348 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:33.778484  146348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:33.778492  146348 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:33.778505  146348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:33.778719  146348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	W0919 22:14:33.778842  146348 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21594-142711/.minikube/config/config.json: open /home/jenkins/minikube-integration/21594-142711/.minikube/config/config.json: no such file or directory
	I0919 22:14:33.779346  146348 out.go:368] Setting JSON to true
	I0919 22:14:33.780729  146348 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3410,"bootTime":1758316664,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:33.780811  146348 start.go:140] virtualization: kvm guest
	I0919 22:14:33.782825  146348 out.go:99] [download-only-873856] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0919 22:14:33.782928  146348 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 22:14:33.782967  146348 notify.go:220] Checking for updates...
	I0919 22:14:33.784142  146348 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:33.785370  146348 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:33.786590  146348 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:14:33.787719  146348 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:14:33.788782  146348 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:33.790655  146348 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:33.790866  146348 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:33.813685  146348 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:33.813804  146348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:34.130027  146348 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:14:34.119850349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:34.130146  146348 docker.go:318] overlay module found
	I0919 22:14:34.131627  146348 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:34.131653  146348 start.go:304] selected driver: docker
	I0919 22:14:34.131662  146348 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:34.131757  146348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:34.187186  146348 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:14:34.176933582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:34.187371  146348 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:34.187926  146348 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:34.188104  146348 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:34.189620  146348 out.go:171] Using Docker driver with root privileges
	I0919 22:14:34.190738  146348 cni.go:84] Creating CNI manager for ""
	I0919 22:14:34.190816  146348 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 22:14:34.190830  146348 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:34.190909  146348 start.go:348] cluster config:
	{Name:download-only-873856 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-873856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:34.192003  146348 out.go:99] Starting "download-only-873856" primary control-plane node in "download-only-873856" cluster
	I0919 22:14:34.192022  146348 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:14:34.192956  146348 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:34.192980  146348 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0919 22:14:34.193085  146348 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:34.208722  146348 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:34.208907  146348 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:34.208999  146348 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:34.296401  146348 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I0919 22:14:34.296435  146348 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:34.296595  146348 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0919 22:14:34.298127  146348 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0919 22:14:34.298155  146348 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 22:14:34.409620  146348 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-873856 host does not exist
	  To start a cluster, run: "minikube start -p download-only-873856"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
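The exit status 85 here is expected for this profile: it was created with --download-only, so no host exists and the logs output above only contains the audit trail; the subtest appears to be measuring how long the logs command takes rather than requiring it to succeed, hence the PASS despite the non-zero exit. A hedged way to confirm the profile state before retrying (plain minikube subcommands, nothing beyond the profile name):

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 status -p download-only-873856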

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)
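delete --all removes every profile known to this minikube home; a hedged sketch of the same cleanup done by hand (adding --purge would also wipe the .minikube directory, which the test does not do):

    out/minikube-linux-amd64 delete --all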

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-873856
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (11.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-569996 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-569996 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.835973003s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (11.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0919 22:14:59.890026  146335 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0919 22:14:59.890084  146335 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-569996
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-569996: exit status 85 (58.405666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-873856 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-873856 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ delete  │ -p download-only-873856                                                                                                                                                       │ download-only-873856 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-569996 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-569996 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:48.093604  146728 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:48.093831  146728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:48.093839  146728 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:48.093843  146728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:48.094033  146728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:14:48.094530  146728 out.go:368] Setting JSON to true
	I0919 22:14:48.095275  146728 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3424,"bootTime":1758316664,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:48.095359  146728 start.go:140] virtualization: kvm guest
	I0919 22:14:48.097035  146728 out.go:99] [download-only-569996] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:48.097151  146728 notify.go:220] Checking for updates...
	I0919 22:14:48.098417  146728 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:48.099684  146728 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:48.100829  146728 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:14:48.101811  146728 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:14:48.102910  146728 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:48.104864  146728 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:48.105148  146728 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:48.126839  146728 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:48.126925  146728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:48.180668  146728 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:14:48.171084822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:48.180778  146728 docker.go:318] overlay module found
	I0919 22:14:48.182351  146728 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:48.182396  146728 start.go:304] selected driver: docker
	I0919 22:14:48.182405  146728 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:48.182516  146728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:48.234062  146728 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-19 22:14:48.225028397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:48.234262  146728 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:48.234776  146728 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:48.234910  146728 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:48.236582  146728 out.go:171] Using Docker driver with root privileges
	I0919 22:14:48.237650  146728 cni.go:84] Creating CNI manager for ""
	I0919 22:14:48.237722  146728 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 22:14:48.237735  146728 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:48.237801  146728 start.go:348] cluster config:
	{Name:download-only-569996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-569996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:48.238949  146728 out.go:99] Starting "download-only-569996" primary control-plane node in "download-only-569996" cluster
	I0919 22:14:48.238974  146728 cache.go:123] Beginning downloading kic base image for docker with docker
	I0919 22:14:48.240021  146728 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:48.240045  146728 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:14:48.240106  146728 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:48.256051  146728 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:48.256171  146728 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:48.256192  146728 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0919 22:14:48.256200  146728 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0919 22:14:48.256211  146728 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0919 22:14:48.349291  146728 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:14:48.349322  146728 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:48.349929  146728 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0919 22:14:48.351513  146728 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0919 22:14:48.351530  146728 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 22:14:48.464888  146728 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4?checksum=md5:994a4de1464928e89c992dfd0a962e35 -> /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0919 22:14:58.037538  146728 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 22:14:58.037655  146728 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21594-142711/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-569996 host does not exist
	  To start a cluster, run: "minikube start -p download-only-569996"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.20s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-569996
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)
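The v1.34.0 download-only sequence above can be replayed by hand as a rough sketch; the commands below are the ones recorded in this run (the locally built out/minikube-linux-amd64 binary and the profile name download-only-569996 are just the values used here, not requirements). The logs call is expected to exit 85 at this point because the node is only cached, never started.

  out/minikube-linux-amd64 start -o=json --download-only -p download-only-569996 --force \
    --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker
  out/minikube-linux-amd64 logs -p download-only-569996    # exit status 85: the host does not exist yet
  out/minikube-linux-amd64 delete --all
  out/minikube-linux-amd64 delete -p download-only-569996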

                                                
                                    
TestDownloadOnlyKic (1.06s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-258653 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-258653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-258653
--- PASS: TestDownloadOnlyKic (1.06s)

TestBinaryMirror (0.79s)
=== RUN   TestBinaryMirror
I0919 22:15:01.594334  146335 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-291376 --alsologtostderr --binary-mirror http://127.0.0.1:46165 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-291376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-291376
--- PASS: TestBinaryMirror (0.79s)

TestOffline (73.24s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-824697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-824697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m10.805355034s)
helpers_test.go:175: Cleaning up "offline-docker-824697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-824697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-824697: (2.431772743s)
--- PASS: TestOffline (73.24s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-810554
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-810554: exit status 85 (56.129925ms)
-- stdout --
	* Profile "addons-810554" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-810554"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-810554
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-810554: exit status 85 (55.635131ms)
-- stdout --
	* Profile "addons-810554" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-810554"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (142.65s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-810554 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-810554 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m22.649428128s)
--- PASS: TestAddons/Setup (142.65s)

TestAddons/serial/Volcano (40.66s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 11.622953ms
addons_test.go:884: volcano-controller stabilized in 11.67966ms
addons_test.go:868: volcano-scheduler stabilized in 11.729546ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-h9474" [7f5311d7-fb7a-45ea-be0f-cfba6822876b] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003117127s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-p9xpj" [3c71f841-3066-42d8-b27d-c0244e5fa33d] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003739696s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-pv4v2" [931ad62f-833b-4172-9cc9-a20ae22aa244] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003465388s
addons_test.go:903: (dbg) Run:  kubectl --context addons-810554 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-810554 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-810554 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [7d951a3c-0eac-44b9-a08b-3c00ea02adc3] Pending
helpers_test.go:352: "test-job-nginx-0" [7d951a3c-0eac-44b9-a08b-3c00ea02adc3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [7d951a3c-0eac-44b9-a08b-3c00ea02adc3] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003753957s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable volcano --alsologtostderr -v=1: (11.316213259s)
--- PASS: TestAddons/serial/Volcano (40.66s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-810554 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-810554 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (10.48s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-810554 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-810554 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [652d64ee-4302-4a99-a664-6578e7e61c1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [652d64ee-4302-4a99-a664-6578e7e61c1f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003447772s
addons_test.go:694: (dbg) Run:  kubectl --context addons-810554 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-810554 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-810554 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.48s)
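A rough manual equivalent of the fake-credentials check above, assuming an addons-810554 profile with the gcp-auth addon enabled; testdata/busybox.yaml is the pod manifest from the minikube test tree, and the wait step here stands in for the polling done by the test helpers:

  kubectl --context addons-810554 create -f testdata/busybox.yaml
  kubectl --context addons-810554 wait --for=condition=Ready pod/busybox --timeout=8m0s
  kubectl --context addons-810554 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-810554 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"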

                                                
                                    
TestAddons/parallel/Registry (15.86s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.227219ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-s9w2q" [88acbad9-285a-4a13-af7a-89a3a8779ddd] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003396341s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rvgxt" [1e7e4b2d-1c65-45b5-b8b7-ef0a4f5dedb8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003236852s
addons_test.go:392: (dbg) Run:  kubectl --context addons-810554 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-810554 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-810554 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.153344098s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 ip
2025/09/19 22:18:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.86s)

TestAddons/parallel/RegistryCreds (0.57s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.114844ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-810554
addons_test.go:332: (dbg) Run:  kubectl --context addons-810554 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.57s)

TestAddons/parallel/Ingress (21.34s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-810554 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-810554 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-810554 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2183cc91-3628-4345-8276-0f332b00277f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2183cc91-3628-4345-8276-0f332b00277f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00276415s
I0919 22:18:52.493239  146335 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-810554 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable ingress-dns --alsologtostderr -v=1: (1.283744764s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable ingress --alsologtostderr -v=1: (7.715616213s)
--- PASS: TestAddons/parallel/Ingress (21.34s)

TestAddons/parallel/InspektorGadget (5.21s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-6njcz" [504f5131-935f-4f25-b22a-8f61b848dc3f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003162082s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.21s)

TestAddons/parallel/MetricsServer (5.56s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.40634ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-q59hd" [51cccba6-4f55-422a-bf24-b7dea50a5193] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003096337s
addons_test.go:463: (dbg) Run:  kubectl --context addons-810554 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.56s)

TestAddons/parallel/CSI (34.56s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0919 22:18:30.693536  146335 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 22:18:30.696926  146335 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 22:18:30.696956  146335 kapi.go:107] duration metric: took 3.445271ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.456966ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-810554 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-810554 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d808e5a6-9786-4660-9f7a-9ecf2ccca219] Pending
helpers_test.go:352: "task-pv-pod" [d808e5a6-9786-4660-9f7a-9ecf2ccca219] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d808e5a6-9786-4660-9f7a-9ecf2ccca219] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003837641s
addons_test.go:572: (dbg) Run:  kubectl --context addons-810554 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-810554 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-810554 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-810554 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-810554 delete pod task-pv-pod: (1.301223743s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-810554 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-810554 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-810554 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [9de99b8b-03de-43c9-9f70-d447336594df] Pending
helpers_test.go:352: "task-pv-pod-restore" [9de99b8b-03de-43c9-9f70-d447336594df] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [9de99b8b-03de-43c9-9f70-d447336594df] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003120547s
addons_test.go:614: (dbg) Run:  kubectl --context addons-810554 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-810554 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-810554 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.453888878s)
--- PASS: TestAddons/parallel/CSI (34.56s)

TestAddons/parallel/Headlamp (17.53s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-810554 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-w6cp8" [92d6f8ec-0a7f-4ff2-9380-4ccc05910bd3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-w6cp8" [92d6f8ec-0a7f-4ff2-9380-4ccc05910bd3] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003366723s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable headlamp --alsologtostderr -v=1: (5.830018976s)
--- PASS: TestAddons/parallel/Headlamp (17.53s)

TestAddons/parallel/CloudSpanner (6.51s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-t9vcn" [1f60c7c4-3107-4e13-9f02-a992690681ab] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003327074s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

TestAddons/parallel/LocalPath (55.52s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-810554 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-810554 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [825a0749-ecd1-4f28-9299-8010d7669413] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [825a0749-ecd1-4f28-9299-8010d7669413] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [825a0749-ecd1-4f28-9299-8010d7669413] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003009663s
addons_test.go:967: (dbg) Run:  kubectl --context addons-810554 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 ssh "cat /opt/local-path-provisioner/pvc-b979b536-35c0-46df-a230-e0be515b14b1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-810554 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-810554 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.644233098s)
--- PASS: TestAddons/parallel/LocalPath (55.52s)

TestAddons/parallel/NvidiaDevicePlugin (6.42s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-tskvf" [ae877332-25cf-4ce1-855d-0b3de6443f7d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00436202s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.42s)

TestAddons/parallel/Yakd (10.62s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7rn22" [a6121100-cb9d-419e-9325-698a98946d63] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004136348s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-810554 addons disable yakd --alsologtostderr -v=1: (5.613520467s)
--- PASS: TestAddons/parallel/Yakd (10.62s)

TestAddons/parallel/AmdGpuDevicePlugin (5.52s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rbktt" [c7d5a18e-e0fb-406e-b898-e53d4eae2686] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003652828s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-810554 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.52s)

TestAddons/StoppedEnableDisable (11.15s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-810554
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-810554: (10.907977589s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-810554
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-810554
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-810554
--- PASS: TestAddons/StoppedEnableDisable (11.15s)

TestCertOptions (30.21s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-719726 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0919 23:13:33.466401  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-719726 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.34622905s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-719726 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-719726 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-719726 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-719726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-719726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-719726: (2.207719738s)
--- PASS: TestCertOptions (30.21s)
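A sketch of the same verification outside the harness, reusing the flags and paths recorded above (the profile name and port 8555 are just this run's values): the extra --apiserver-ips/--apiserver-names should appear as Subject Alternative Names in the apiserver certificate, and 8555 as the server port in the kubeconfig.

  out/minikube-linux-amd64 start -p cert-options-719726 --memory=3072 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 -p cert-options-719726 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  kubectl --context cert-options-719726 config view
  out/minikube-linux-amd64 delete -p cert-options-719726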

                                                
                                    
TestCertExpiration (253.37s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.698157181s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (36.448036251s)
helpers_test.go:175: Cleaning up "cert-expiration-073186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-073186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-073186: (2.226653465s)
--- PASS: TestCertExpiration (253.37s)
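The cert-expiration scenario above boils down to two starts of the same profile; judging by the gap between the two starts in this run, the harness appears to wait out the short 3m window before restarting with a long expiration, which forces the certificates to be regenerated rather than reused. A manual sketch using the recorded commands:

  out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=docker
  # let the 3-minute certificates lapse, then restart with a one-year expiration
  out/minikube-linux-amd64 start -p cert-expiration-073186 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 delete -p cert-expiration-073186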

                                                
                                    
TestDockerFlags (26.48s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-494087 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-494087 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (23.725574422s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-494087 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-494087 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-494087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-494087
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-494087: (2.181164437s)
--- PASS: TestDockerFlags (26.48s)
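The docker-flags check above can be reproduced with the same invocation; the --docker-env values are expected to show up under the docker service's Environment property and the --docker-opt values in its ExecStart line (the FOO/BAZ names and debug/icc options are simply the ones used in this run):

  out/minikube-linux-amd64 start -p docker-flags-494087 --cache-images=false --memory=3072 --install-addons=false --wait=false \
    --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true \
    --alsologtostderr -v=5 --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 -p docker-flags-494087 ssh "sudo systemctl show docker --property=Environment --no-pager"
  out/minikube-linux-amd64 -p docker-flags-494087 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
  out/minikube-linux-amd64 delete -p docker-flags-494087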

                                                
                                    
TestForceSystemdFlag (37.85s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-035806 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-035806 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.150328213s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-035806 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-035806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-035806
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-035806: (2.31199231s)
--- PASS: TestForceSystemdFlag (37.85s)
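
The check behind docker_test.go:110 is a single docker info query for the cgroup driver; a minimal manual sketch, reusing the profile created above, would be:

    # With --force-systemd the node's Docker daemon should report the systemd cgroup driver
    out/minikube-linux-amd64 start -p force-systemd-flag-035806 --force-systemd --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p force-systemd-flag-035806 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd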

                                                
                                    
TestForceSystemdEnv (27.59s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-869213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-869213 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.988350334s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-869213 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-869213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-869213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-869213: (2.2669948s)
--- PASS: TestForceSystemdEnv (27.59s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.4s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0919 23:13:46.934568  146335 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 23:13:46.934749  146335 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2558660836/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:13:46.963054  146335 install.go:134] /tmp/TestKVMDriverInstallOrUpdate2558660836/001/docker-machine-driver-kvm2 version is {Version:v1.1.1 Commit:40a1a986a50eac533e396012e35516d3d6c63f36-dirty}
W0919 23:13:46.963123  146335 install.go:61] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0 or later
W0919 23:13:46.963240  146335 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 23:13:46.963295  146335 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2558660836/001/docker-machine-driver-kvm2
I0919 23:13:48.185919  146335 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate2558660836/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:13:48.202475  146335 install.go:134] /tmp/TestKVMDriverInstallOrUpdate2558660836/001/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:1af8bdc072232de4b1fec3b6cc0e8337e118bc83}
--- PASS: TestKVMDriverInstallOrUpdate (1.40s)
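
The download step logged above fetches the driver from the minikube release page and verifies it against the published sha256. Roughly the same thing can be done by hand; the URLs are taken from the log, while curl and sha256sum are assumptions of this sketch rather than tools the harness shells out to.

    curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
    curl -LO https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
    # Compare the published digest with a locally computed one
    sha256sum docker-machine-driver-kvm2-amd64
    cat docker-machine-driver-kvm2-amd64.sha256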

                                                
                                    
TestErrorSpam/setup (22.03s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-550211 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-550211 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-550211 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-550211 --driver=docker  --container-runtime=docker: (22.027643821s)
--- PASS: TestErrorSpam/setup (22.03s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.93s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 pause
--- PASS: TestErrorSpam/pause (1.20s)

                                                
                                    
TestErrorSpam/unpause (1.27s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

                                                
                                    
TestErrorSpam/stop (10.89s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 stop: (10.713373249s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550211 --log_dir /tmp/nospam-550211 stop
--- PASS: TestErrorSpam/stop (10.89s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21594-142711/.minikube/files/etc/test/nested/copy/146335/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (68.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-432755 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m8.182774386s)
--- PASS: TestFunctional/serial/StartWithProxy (68.18s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (48.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0919 22:21:39.328752  146335 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --alsologtostderr -v=8
E0919 22:22:25.091708  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.098143  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.109485  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.130832  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.172144  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.253356  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.414948  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:25.737042  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:26.378343  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-432755 --alsologtostderr -v=8: (48.024885173s)
functional_test.go:678: soft start took 48.025647481s for "functional-432755" cluster.
I0919 22:22:27.354019  146335 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (48.03s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-432755 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache add registry.k8s.io/pause:3.1
E0919 22:22:27.660317  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-432755 /tmp/TestFunctionalserialCacheCmdcacheadd_local615588304/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache add minikube-local-cache-test:functional-432755
E0919 22:22:30.222504  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-432755 cache add minikube-local-cache-test:functional-432755: (1.130236349s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache delete minikube-local-cache-test:functional-432755
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-432755
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (275.224907ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
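
Taken together, the cache subtests above amount to the following workflow; these are the same commands the harness runs, just without the test wrapper (pause:latest is simply the image the test uses).

    # Pre-load an image into minikube's local cache and push it into the node
    out/minikube-linux-amd64 -p functional-432755 cache add registry.k8s.io/pause:latest
    # Remove it from the node's runtime, so only the cache copy remains
    out/minikube-linux-amd64 -p functional-432755 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
    # Reload everything in the cache back into the node and re-check
    out/minikube-linux-amd64 -p functional-432755 cache reload
    out/minikube-linux-amd64 -p functional-432755 ssh sudo crictl inspecti registry.k8s.io/pause:latest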

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 kubectl -- --context functional-432755 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-432755 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (52.92s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 22:22:35.343974  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:45.585658  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:23:06.067437  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-432755 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.922696646s)
functional_test.go:776: restart took 52.922843686s for "functional-432755" cluster.
I0919 22:23:26.093734  146335 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (52.92s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-432755 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 logs --file /tmp/TestFunctionalserialLogsFileCmd1212288907/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.97s)

                                                
                                    
TestFunctional/serial/InvalidService (3.99s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-432755 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-432755
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-432755: exit status 115 (344.028519ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31419 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-432755 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)
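
The exit status 115 / SVC_UNREACHABLE above is the expected outcome: testdata/invalidsvc.yaml produces a Service for which no running pod is found, so `minikube service` prints the would-be NodePort URL but then aborts. A manual reproduction against the same profile:

    kubectl --context functional-432755 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-432755   # exits 115 with SVC_UNREACHABLE
    kubectl --context functional-432755 delete -f testdata/invalidsvc.yaml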

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 config get cpus: exit status 14 (69.645898ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 config get cpus: exit status 14 (49.687888ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
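
The two exit-status-14 results above are the interesting part: `config get` on a key that is not set fails, which is what the test relies on before and after the set/unset round trip. In shell form (the value 2 is just the CPU count the test happens to use):

    out/minikube-linux-amd64 -p functional-432755 config get cpus     # exit 14: key not in config
    out/minikube-linux-amd64 -p functional-432755 config set cpus 2
    out/minikube-linux-amd64 -p functional-432755 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-432755 config unset cpus
    out/minikube-linux-amd64 -p functional-432755 config get cpus     # exit 14 again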

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-432755 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-432755 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 199364: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.86s)

                                                
                                    
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-432755 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (162.863825ms)

                                                
                                                
-- stdout --
	* [functional-432755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:24:02.125652  198269 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:02.125933  198269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:02.125944  198269 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:02.125952  198269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:02.126192  198269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:02.126671  198269 out.go:368] Setting JSON to false
	I0919 22:24:02.127661  198269 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3978,"bootTime":1758316664,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:02.127773  198269 start.go:140] virtualization: kvm guest
	I0919 22:24:02.129691  198269 out.go:179] * [functional-432755] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:02.130781  198269 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:02.130796  198269 notify.go:220] Checking for updates...
	I0919 22:24:02.133345  198269 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:02.134424  198269 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:02.135453  198269 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:02.136609  198269 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:02.137640  198269 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:02.139015  198269 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:02.139670  198269 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:02.167895  198269 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:02.168072  198269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:02.227869  198269 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-19 22:24:02.217099458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:02.227991  198269 docker.go:318] overlay module found
	I0919 22:24:02.230463  198269 out.go:179] * Using the docker driver based on existing profile
	I0919 22:24:02.232040  198269 start.go:304] selected driver: docker
	I0919 22:24:02.232058  198269 start.go:918] validating driver "docker" against &{Name:functional-432755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-432755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:02.232160  198269 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:02.233894  198269 out.go:203] 
	W0919 22:24:02.234967  198269 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:24:02.236716  198269 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.37s)
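
What fails here is the memory validation, not the cluster: even with --dry-run, minikube validates the requested allocation, and 250MB is below the 1800MB floor, hence exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY; the second invocation omits --memory and succeeds. A condensed sketch of the same two calls:

    out/minikube-linux-amd64 start -p functional-432755 --dry-run --memory 250MB --driver=docker --container-runtime=docker
    echo $?   # 23: RSRC_INSUFFICIENT_REQ_MEMORY, 250MiB is below the 1800MB minimum
    out/minikube-linux-amd64 start -p functional-432755 --dry-run --driver=docker --container-runtime=docker
    echo $?   # 0: no memory override, so the existing profile's 4096MB allocation is kept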

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432755 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-432755 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (156.564665ms)

                                                
                                                
-- stdout --
	* [functional-432755] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:24:01.970981  198157 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:01.971071  198157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:01.971079  198157 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:01.971083  198157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:01.971378  198157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:24:01.971906  198157 out.go:368] Setting JSON to false
	I0919 22:24:01.972941  198157 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3978,"bootTime":1758316664,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:01.973036  198157 start.go:140] virtualization: kvm guest
	I0919 22:24:01.976117  198157 out.go:179] * [functional-432755] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:01.977416  198157 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:01.977422  198157 notify.go:220] Checking for updates...
	I0919 22:24:01.979791  198157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:01.980876  198157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	I0919 22:24:01.982021  198157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	I0919 22:24:01.983171  198157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:01.984271  198157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:01.985876  198157 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:24:01.986364  198157 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:02.010956  198157 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:24:02.011090  198157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:24:02.066301  198157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:24:02.056551349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:24:02.066461  198157 docker.go:318] overlay module found
	I0919 22:24:02.068018  198157 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0919 22:24:02.069061  198157 start.go:304] selected driver: docker
	I0919 22:24:02.069076  198157 start.go:918] validating driver "docker" against &{Name:functional-432755 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-432755 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:02.069192  198157 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:02.071079  198157 out.go:203] 
	W0919 22:24:02.072290  198157 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 22:24:02.073340  198157 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
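
The three status invocations above cover the output formats minikube supports; the Go-template form is the scripting-friendly one (the labels before each {{.Field}} are arbitrary text, only the field names matter). A trimmed sketch:

    out/minikube-linux-amd64 -p functional-432755 status                                        # default human-readable output
    out/minikube-linux-amd64 -p functional-432755 status -o json                                # machine-readable
    out/minikube-linux-amd64 -p functional-432755 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'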

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-432755 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-432755 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9b8rm" [48bc3d56-88fe-47dc-9651-8ffb9afb3252] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-9b8rm" [48bc3d56-88fe-47dc-9651-8ffb9afb3252] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003664141s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service hello-node-connect --url
E0919 22:23:47.029717  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32480
functional_test.go:1680: http://192.168.49.2:32480: success! body:
Request served by hello-node-connect-7d85dfc575-9b8rm

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32480
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
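
The flow above is the standard NodePort round trip: create a deployment from the kicbase/echo-server image, expose it, ask minikube for the URL, and fetch it. The curl at the end is an assumption of this sketch; the test uses its own Go HTTP client.

    kubectl --context functional-432755 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-432755 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-432755 service hello-node-connect --url)
    curl "$URL"   # echo-server responds with the serving pod name and the request headers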

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7e365081-a939-49ab-b80e-1805ee14f0ec] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.031854408s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-432755 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-432755 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-432755 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-432755 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:23:39.863381  146335 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [354f000c-fa61-46d6-8ef3-564f6431c14a] Pending
helpers_test.go:352: "sp-pod" [354f000c-fa61-46d6-8ef3-564f6431c14a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [354f000c-fa61-46d6-8ef3-564f6431c14a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003399565s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-432755 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-432755 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-432755 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:24:03.677749  146335 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [abfd8c13-9f91-4408-8048-392341ac3ac6] Pending
helpers_test.go:352: "sp-pod" [abfd8c13-9f91-4408-8048-392341ac3ac6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [abfd8c13-9f91-4408-8048-392341ac3ac6] Running
2025/09/19 22:24:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.002983803s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-432755 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.33s)
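Note: the claim and pod manifests applied above (testdata/storage-provisioner/pvc.yaml and pod.yaml) are not reproduced in this log. A minimal sketch of an equivalent pair, assuming the cluster's default storage class and reusing the object names that do appear in the log (myclaim, sp-pod, myfrontend, /tmp/mount), would be:

# Sketch only; the real testdata manifests may differ in image, size, and volume name.
kubectl --context functional-432755 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi          # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx              # assumed image
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd              # illustrative volume name
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF

The point of applying pod.yaml twice in the test is that /tmp/mount/foo, created before the first sp-pod is deleted, is still visible once a new pod mounts the same claim.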

                                                
                                    
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh -n functional-432755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cp functional-432755:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1491440077/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh -n functional-432755 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh -n functional-432755 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.72s)

                                                
                                    
TestFunctional/parallel/MySQL (21.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-432755 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-zd49m" [a7fc0135-6305-4c66-afc5-3b8525765647] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-zd49m" [a7fc0135-6305-4c66-afc5-3b8525765647] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003983631s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- mysql -ppassword -e "show databases;": exit status 1 (164.196036ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 22:23:57.562059  146335 retry.go:31] will retry after 1.347358547s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- mysql -ppassword -e "show databases;": exit status 1 (135.55695ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 22:23:59.045996  146335 retry.go:31] will retry after 1.832134519s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.75s)
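Note: the two non-zero exits above are expected startup noise rather than failures. The pod reports Running before mysqld has finished initializing, so the first queries fail (access denied, then the socket not yet available) and retry.go simply waits and tries again. Done by hand, an equivalent wait loop (sketch only; the pod name is the one from this run) would be:

# Poll until mysqld inside the pod answers the same query the test runs.
until kubectl --context functional-432755 exec mysql-5bb876957f-zd49m -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2
done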

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/146335/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /etc/test/nested/copy/146335/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/146335.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /etc/ssl/certs/146335.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/146335.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /usr/share/ca-certificates/146335.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1463352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /etc/ssl/certs/1463352.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1463352.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /usr/share/ca-certificates/1463352.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)
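Note: CertSync checks each synced certificate both under its .pem path and under an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0); the hash-named entries in /etc/ssl/certs are what OpenSSL-based clients actually look up. The expected hash can be derived from the .pem itself, for example (sketch; assumes openssl is available in the guest):

# Print the subject hash OpenSSL would use to name this cert in /etc/ssl/certs.
out/minikube-linux-amd64 -p functional-432755 ssh \
  "openssl x509 -noout -hash -in /usr/share/ca-certificates/146335.pem"
# The output should match the numeric prefix of one of the .0 entries checked above.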

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-432755 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh "sudo systemctl is-active crio": exit status 1 (288.200184ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
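Note: the non-zero exit above is the expected outcome, not a failure. With the docker runtime active, crio must be inactive, and systemctl is-active exits non-zero (status 3 in this run) whenever the unit is not active, which ssh then propagates. A quick way to see the same behaviour directly (sketch):

# is-active prints the unit state and only exits 0 when the unit is active.
out/minikube-linux-amd64 -p functional-432755 ssh 'sudo systemctl is-active crio; echo exit=$?'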

                                                
                                    
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432755 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-432755
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-432755
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432755 image ls --format short --alsologtostderr:
I0919 22:24:06.788004  201123 out.go:360] Setting OutFile to fd 1 ...
I0919 22:24:06.788371  201123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:06.788385  201123 out.go:374] Setting ErrFile to fd 2...
I0919 22:24:06.788391  201123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:06.788657  201123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
I0919 22:24:06.789307  201123 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:06.789421  201123 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:06.789824  201123 cli_runner.go:164] Run: docker container inspect functional-432755 --format={{.State.Status}}
I0919 22:24:06.807009  201123 ssh_runner.go:195] Run: systemctl --version
I0919 22:24:06.807066  201123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432755
I0919 22:24:06.826087  201123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/functional-432755/id_rsa Username:docker}
I0919 22:24:06.920627  201123 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432755 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-432755 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kubernetesui/metrics-scraper      │ <none>            │ 115053965e86b │ 43.8MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/nginx                     │ latest            │ 41f689c209100 │ 192MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/library/mysql                     │ 5.7               │ 5107333e08a87 │ 501MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test │ functional-432755 │ 6917a206da544 │ 30B    │
│ docker.io/library/nginx                     │ alpine            │ 4a86014ec6994 │ 52.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432755 image ls --format table --alsologtostderr:
I0919 22:24:07.622600  201536 out.go:360] Setting OutFile to fd 1 ...
I0919 22:24:07.622910  201536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.622922  201536 out.go:374] Setting ErrFile to fd 2...
I0919 22:24:07.622927  201536 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.623121  201536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
I0919 22:24:07.623819  201536 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.623958  201536 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.624379  201536 cli_runner.go:164] Run: docker container inspect functional-432755 --format={{.State.Status}}
I0919 22:24:07.644805  201536 ssh_runner.go:195] Run: systemctl --version
I0919 22:24:07.644863  201536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432755
I0919 22:24:07.664375  201536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/functional-432755/id_rsa Username:docker}
I0919 22:24:07.763365  201536 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432755 image ls --format json --alsologtostderr:
[{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c1
04e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6917a206da5449b62a099bd59cdaad89f8450253f58e774d9a72596f27c00b3c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-432755"],"size":"30"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s
.io/pause:latest"],"size":"240000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-432755","docker.io/kicbase/echo-server:latest"],"size":"4940000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432755 image ls --format json --alsologtostderr:
I0919 22:24:07.412400  201447 out.go:360] Setting OutFile to fd 1 ...
I0919 22:24:07.412528  201447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.412539  201447 out.go:374] Setting ErrFile to fd 2...
I0919 22:24:07.412545  201447 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.412730  201447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
I0919 22:24:07.413277  201447 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.413385  201447 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.413803  201447 cli_runner.go:164] Run: docker container inspect functional-432755 --format={{.State.Status}}
I0919 22:24:07.432599  201447 ssh_runner.go:195] Run: systemctl --version
I0919 22:24:07.432655  201447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432755
I0919 22:24:07.452080  201447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/functional-432755/id_rsa Username:docker}
I0919 22:24:07.545623  201447 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432755 image ls --format yaml --alsologtostderr:
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-432755
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6917a206da5449b62a099bd59cdaad89f8450253f58e774d9a72596f27c00b3c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-432755
size: "30"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432755 image ls --format yaml --alsologtostderr:
I0919 22:24:07.005219  201251 out.go:360] Setting OutFile to fd 1 ...
I0919 22:24:07.005711  201251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.005726  201251 out.go:374] Setting ErrFile to fd 2...
I0919 22:24:07.005732  201251 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.006212  201251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
I0919 22:24:07.007325  201251 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.007430  201251 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.007913  201251 cli_runner.go:164] Run: docker container inspect functional-432755 --format={{.State.Status}}
I0919 22:24:07.025573  201251 ssh_runner.go:195] Run: systemctl --version
I0919 22:24:07.025640  201251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432755
I0919 22:24:07.042426  201251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/functional-432755/id_rsa Username:docker}
I0919 22:24:07.135332  201251 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh pgrep buildkitd: exit status 1 (253.495301ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image build -t localhost/my-image:functional-432755 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-432755 image build -t localhost/my-image:functional-432755 testdata/build --alsologtostderr: (3.908190894s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432755 image build -t localhost/my-image:functional-432755 testdata/build --alsologtostderr:
I0919 22:24:07.467640  201466 out.go:360] Setting OutFile to fd 1 ...
I0919 22:24:07.467943  201466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.467954  201466 out.go:374] Setting ErrFile to fd 2...
I0919 22:24:07.467960  201466 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:24:07.468176  201466 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
I0919 22:24:07.468818  201466 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.469557  201466 config.go:182] Loaded profile config "functional-432755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0919 22:24:07.470048  201466 cli_runner.go:164] Run: docker container inspect functional-432755 --format={{.State.Status}}
I0919 22:24:07.487576  201466 ssh_runner.go:195] Run: systemctl --version
I0919 22:24:07.487624  201466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432755
I0919 22:24:07.504158  201466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/functional-432755/id_rsa Username:docker}
I0919 22:24:07.595299  201466 build_images.go:161] Building image from path: /tmp/build.2913965508.tar
I0919 22:24:07.595367  201466 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 22:24:07.605327  201466 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2913965508.tar
I0919 22:24:07.608972  201466 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2913965508.tar: stat -c "%s %y" /var/lib/minikube/build/build.2913965508.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2913965508.tar': No such file or directory
I0919 22:24:07.609005  201466 ssh_runner.go:362] scp /tmp/build.2913965508.tar --> /var/lib/minikube/build/build.2913965508.tar (3072 bytes)
I0919 22:24:07.642549  201466 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2913965508
I0919 22:24:07.654845  201466 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2913965508 -xf /var/lib/minikube/build/build.2913965508.tar
I0919 22:24:07.665979  201466 docker.go:361] Building image: /var/lib/minikube/build/build.2913965508
I0919 22:24:07.666041  201466 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-432755 /var/lib/minikube/build/build.2913965508
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e0994fd693f49d1d0c72545ed61dcfe4995e9c882fdf06c0b2410085ea752d78 done
#8 naming to localhost/my-image:functional-432755 done
#8 DONE 0.0s
I0919 22:24:11.298444  201466 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-432755 /var/lib/minikube/build/build.2913965508: (3.632370685s)
I0919 22:24:11.298587  201466 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2913965508
I0919 22:24:11.309600  201466 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2913965508.tar
I0919 22:24:11.321049  201466 build_images.go:217] Built localhost/my-image:functional-432755 from /tmp/build.2913965508.tar
I0919 22:24:11.321088  201466 build_images.go:133] succeeded building to: functional-432755
I0919 22:24:11.321094  201466 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.39s)
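Note: the BuildKit trace above implies a three-step Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /). A sketch that reproduces the same build by hand, with the Dockerfile reconstructed from those logged steps (the real testdata/build may differ), is:

# Reconstructed from the build steps logged above; content.txt contents are a placeholder.
mkdir -p /tmp/build-sketch
printf 'hello\n' > /tmp/build-sketch/content.txt
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-432755 image build -t localhost/my-image:functional-432755 /tmp/build-sketch
out/minikube-linux-amd64 -p functional-432755 image ls | grep my-image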

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.992429059s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-432755
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-432755 docker-env) && out/minikube-linux-amd64 status -p functional-432755"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-432755 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 193000: os: process already finished
helpers_test.go:525: unable to kill pid 192622: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-432755 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [414942ea-2002-450b-9000-c3d62c78a954] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [414942ea-2002-450b-9000-c3d62c78a954] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005052029s
I0919 22:23:44.682147  146335 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image load --daemon kicbase/echo-server:functional-432755 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image load --daemon kicbase/echo-server:functional-432755 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-432755
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image load --daemon kicbase/echo-server:functional-432755 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image save kicbase/echo-server:functional-432755 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image rm kicbase/echo-server:functional-432755 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-432755
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 image save --daemon kicbase/echo-server:functional-432755 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-432755
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-432755 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.93.48 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-432755 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (18.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-432755 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-432755 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xs7rb" [b5d9c213-7050-4499-9dc9-22ec60477d35] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-xs7rb" [b5d9c213-7050-4499-9dc9-22ec60477d35] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.004121322s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "357.823968ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.356738ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "356.062997ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.3162ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdany-port3260770597/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758320628779355515" to /tmp/TestFunctionalparallelMountCmdany-port3260770597/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758320628779355515" to /tmp/TestFunctionalparallelMountCmdany-port3260770597/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758320628779355515" to /tmp/TestFunctionalparallelMountCmdany-port3260770597/001/test-1758320628779355515
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.322402ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:23:49.077064  146335 retry.go:31] will retry after 323.012576ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 22:23 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 22:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 22:23 test-1758320628779355515
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh cat /mount-9p/test-1758320628779355515
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-432755 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e6af9ca4-9abe-4d8f-994d-0df9af8a03ca] Pending
helpers_test.go:352: "busybox-mount" [e6af9ca4-9abe-4d8f-994d-0df9af8a03ca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e6af9ca4-9abe-4d8f-994d-0df9af8a03ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e6af9ca4-9abe-4d8f-994d-0df9af8a03ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.003785856s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-432755 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdany-port3260770597/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdspecific-port1557055499/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.556109ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:24:02.753620  146335 retry.go:31] will retry after 441.23712ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdspecific-port1557055499/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh "sudo umount -f /mount-9p": exit status 1 (309.648641ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-432755 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdspecific-port1557055499/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-432755 service list -o json: (1.737599301s)
functional_test.go:1504: Took "1.737704383s" to run "out/minikube-linux-amd64 -p functional-432755 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.74s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T" /mount1: exit status 1 (311.453424ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:24:04.642517  146335 retry.go:31] will retry after 298.630991ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-432755 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432755 /tmp/TestFunctionalparallelMountCmdVerifyCleanup212362738/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30710
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-432755 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30710
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-432755
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-432755
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-432755
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-434755 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (21.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 stop --alsologtostderr -v 5
E0919 22:43:33.469317  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-434755 stop --alsologtostderr -v 5: (21.63118344s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-434755 status --alsologtostderr -v 5: exit status 7 (98.843717ms)

                                                
                                                
-- stdout --
	ha-434755
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434755-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-434755-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:43:39.192794  306708 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:43:39.192889  306708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.192898  306708 out.go:374] Setting ErrFile to fd 2...
	I0919 22:43:39.192902  306708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:43:39.193121  306708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 22:43:39.193286  306708 out.go:368] Setting JSON to false
	I0919 22:43:39.193306  306708 mustload.go:65] Loading cluster: ha-434755
	I0919 22:43:39.193389  306708 notify.go:220] Checking for updates...
	I0919 22:43:39.193664  306708 config.go:182] Loaded profile config "ha-434755": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 22:43:39.193690  306708 status.go:174] checking status of ha-434755 ...
	I0919 22:43:39.194144  306708 cli_runner.go:164] Run: docker container inspect ha-434755 --format={{.State.Status}}
	I0919 22:43:39.212825  306708 status.go:371] ha-434755 host status = "Stopped" (err=<nil>)
	I0919 22:43:39.212844  306708 status.go:384] host is not running, skipping remaining checks
	I0919 22:43:39.212850  306708 status.go:176] ha-434755 status: &{Name:ha-434755 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:43:39.212870  306708 status.go:174] checking status of ha-434755-m02 ...
	I0919 22:43:39.213083  306708 cli_runner.go:164] Run: docker container inspect ha-434755-m02 --format={{.State.Status}}
	I0919 22:43:39.229069  306708 status.go:371] ha-434755-m02 host status = "Stopped" (err=<nil>)
	I0919 22:43:39.229106  306708 status.go:384] host is not running, skipping remaining checks
	I0919 22:43:39.229114  306708 status.go:176] ha-434755-m02 status: &{Name:ha-434755-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:43:39.229149  306708 status.go:174] checking status of ha-434755-m04 ...
	I0919 22:43:39.229438  306708 cli_runner.go:164] Run: docker container inspect ha-434755-m04 --format={{.State.Status}}
	I0919 22:43:39.245256  306708 status.go:371] ha-434755-m04 host status = "Stopped" (err=<nil>)
	I0919 22:43:39.245272  306708 status.go:384] host is not running, skipping remaining checks
	I0919 22:43:39.245277  306708 status.go:176] ha-434755-m04 status: &{Name:ha-434755-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (21.73s)
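
For orientation, a minimal sketch (not part of the test suite) of how a caller could interpret the exit code that "minikube status" returned above, where the fully stopped profile produced exit status 7. The profile name and the meaning attached to the code are taken from this run only, not asserted as a documented contract.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the test above, minus --alsologtostderr.
	cmd := exec.Command("minikube", "-p", "ha-434755", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above, the stopped cluster reported exit status 7.
		fmt.Printf("status exited with code %d: cluster not fully running\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}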

                                                
                                    
TestImageBuild/serial/Setup (20.8s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-158366 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-158366 --driver=docker  --container-runtime=docker: (20.8045232s)
--- PASS: TestImageBuild/serial/Setup (20.80s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-158366
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-158366: (1.102700611s)
--- PASS: TestImageBuild/serial/NormalBuild (1.10s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.63s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-158366
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.63s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-158366
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.45s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-158366
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.45s)

                                                
                                    
TestJSONOutput/start/Command (66.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-042065 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E0919 22:55:28.161701  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-042065 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m6.784477176s)
--- PASS: TestJSONOutput/start/Command (66.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-042065 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-042065 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-042065 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-042065 --output=json --user=testUser: (10.719828292s)
--- PASS: TestJSONOutput/stop/Command (10.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-146550 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-146550 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.389613ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"42df9132-82ff-4804-9aea-4d756deb0525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-146550] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a5fcdc1-29f1-4358-8e3a-c0efdb099358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"ac870e54-d2ef-4315-9824-a8c67d9e36ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7e09ca41-0dba-4867-8083-15a74a46ce57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig"}}
	{"specversion":"1.0","id":"4ba0b8fa-a65d-4751-b2ae-9a85111994f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube"}}
	{"specversion":"1.0","id":"bd14961f-a64c-47fb-b9a0-bc6b5bd2e488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ec7ab061-17cd-4fc4-9177-4fab63efc311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02c5cbc7-5f08-4bb9-9cf0-d0555d0861de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-146550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-146550
--- PASS: TestErrorJSONOutput (0.19s)
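
The stdout captured above shows the CloudEvents-style records that "minikube start --output=json" emits (type values io.k8s.sigs.minikube.step, .info and .error). A minimal Go sketch of decoding those lines follows; the struct covers only the fields visible in this log, and piping minikube's output into it is an assumption about how one might consume it.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Exitcode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore lines that are not JSON events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exit code %s): %s\n", ev.Data.Exitcode, ev.Data.Message)
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data.Message)
	}
}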

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-988832 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-988832 --network=: (22.231264901s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-988832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-988832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-988832: (2.055453266s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.31s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.95s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-584311 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-584311 --network=bridge: (20.027666845s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-584311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-584311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-584311: (1.906484409s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.95s)

                                                
                                    
TestKicExistingNetwork (25.45s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0919 22:57:02.636911  146335 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 22:57:02.653150  146335 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 22:57:02.653235  146335 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 22:57:02.653263  146335 cli_runner.go:164] Run: docker network inspect existing-network
W0919 22:57:02.668837  146335 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 22:57:02.668863  146335 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0919 22:57:02.668885  146335 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0919 22:57:02.668992  146335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 22:57:02.684969  146335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-db7021220859 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:86:a3:92:23:56:8a} reservation:<nil>}
I0919 22:57:02.685341  146335 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00208ede0}
I0919 22:57:02.685385  146335 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 22:57:02.685448  146335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 22:57:02.738593  146335 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-616326 --network=existing-network
E0919 22:57:25.091663  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-616326 --network=existing-network: (23.406317175s)
helpers_test.go:175: Cleaning up "existing-network-616326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-616326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-616326: (1.910659028s)
I0919 22:57:28.071661  146335 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.45s)
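
The log above pre-creates the "existing-network" bridge network with docker and then starts a profile against it with --network=existing-network. A reduced sketch of that sequence, using only flags that appear in the log (the extra -o and --label options are dropped) and illustrative profile and subnet values:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, roughly like the
// test's cli_runner does.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1", "existing-network"); err != nil {
		fmt.Println("network create failed:", err)
		return
	}
	if err := run("out/minikube-linux-amd64", "start",
		"-p", "existing-network-616326", "--network=existing-network"); err != nil {
		fmt.Println("minikube start failed:", err)
	}
}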

                                                
                                    
TestKicCustomSubnet (23.97s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-363739 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-363739 --subnet=192.168.60.0/24: (21.86644483s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-363739 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-363739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-363739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-363739: (2.077739494s)
--- PASS: TestKicCustomSubnet (23.97s)

                                                
                                    
TestKicStaticIP (24.04s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-847948 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-847948 --static-ip=192.168.200.200: (21.834649162s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-847948 ip
helpers_test.go:175: Cleaning up "static-ip-847948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-847948
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-847948: (2.071003801s)
--- PASS: TestKicStaticIP (24.04s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (51.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-704500 --driver=docker  --container-runtime=docker
E0919 22:58:33.473694  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-704500 --driver=docker  --container-runtime=docker: (23.345128103s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-715559 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-715559 --driver=docker  --container-runtime=docker: (22.742643103s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-704500
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-715559
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-715559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-715559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-715559: (2.132463683s)
helpers_test.go:175: Cleaning up "first-704500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-704500
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-704500: (2.145745529s)
--- PASS: TestMinikubeProfile (51.59s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-249245 --memory=3072 --mount-string /tmp/TestMountStartserial1788356416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-249245 --memory=3072 --mount-string /tmp/TestMountStartserial1788356416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.094839997s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.10s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-249245 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.32s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-271497 --memory=3072 --mount-string /tmp/TestMountStartserial1788356416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-271497 --memory=3072 --mount-string /tmp/TestMountStartserial1788356416/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.322897121s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.32s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.5s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-249245 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-249245 --alsologtostderr -v=5: (1.501207379s)
--- PASS: TestMountStart/serial/DeleteFirst (1.50s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-271497
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-271497: (1.187569693s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.15s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-271497
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-271497: (8.153466105s)
--- PASS: TestMountStart/serial/RestartStopped (9.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-271497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (57.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-390770 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-390770 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (57.067015876s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.54s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (54.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-390770 -- rollout status deployment/busybox: (3.384871611s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:39.921672  146335 retry.go:31] will retry after 802.732006ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:40.845984  146335 retry.go:31] will retry after 1.761790199s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:42.742192  146335 retry.go:31] will retry after 3.101028828s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:45.962715  146335 retry.go:31] will retry after 3.906082382s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:49.991442  146335 retry.go:31] will retry after 4.638056152s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:54.749796  146335 retry.go:31] will retry after 5.048554894s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:00:59.914938  146335 retry.go:31] will retry after 7.125377467s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0919 23:01:07.159114  146335 retry.go:31] will retry after 21.955360964s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-pqcbp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-qgszw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-pqcbp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-qgszw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-pqcbp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-qgszw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (54.26s)
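
The retry.go lines above poll the pod IPs with a growing delay until both busybox replicas report an address. A schematic Go version of that poll-and-backoff loop; pollPodIPs is a hypothetical stand-in for the kubectl jsonpath query the test runs, and the growth factor only approximates the delays seen in this log.

package main

import (
	"fmt"
	"time"
)

// pollPodIPs stands in for:
//   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
// and returns how many pod IPs were reported.
func pollPodIPs() int {
	return 1 // placeholder
}

func main() {
	delay := 800 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if pollPodIPs() >= 2 {
			fmt.Println("both pod IPs reported")
			return
		}
		fmt.Printf("attempt %d: expected 2 Pod IPs, will retry after %v\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly as in the log above
	}
	fmt.Println("gave up waiting for 2 Pod IPs")
}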

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-pqcbp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-pqcbp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-qgszw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-390770 -- exec busybox-7b57f96db7-qgszw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
TestMultiNode/serial/AddNode (13.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-390770 -v=5 --alsologtostderr
E0919 23:01:36.535003  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-390770 -v=5 --alsologtostderr: (12.830027672s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.44s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-390770 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp testdata/cp-test.txt multinode-390770:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1596286742/001/cp-test_multinode-390770.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770:/home/docker/cp-test.txt multinode-390770-m02:/home/docker/cp-test_multinode-390770_multinode-390770-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test_multinode-390770_multinode-390770-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770:/home/docker/cp-test.txt multinode-390770-m03:/home/docker/cp-test_multinode-390770_multinode-390770-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test_multinode-390770_multinode-390770-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp testdata/cp-test.txt multinode-390770-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1596286742/001/cp-test_multinode-390770-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m02:/home/docker/cp-test.txt multinode-390770:/home/docker/cp-test_multinode-390770-m02_multinode-390770.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test_multinode-390770-m02_multinode-390770.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m02:/home/docker/cp-test.txt multinode-390770-m03:/home/docker/cp-test_multinode-390770-m02_multinode-390770-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test_multinode-390770-m02_multinode-390770-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp testdata/cp-test.txt multinode-390770-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1596286742/001/cp-test_multinode-390770-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m03:/home/docker/cp-test.txt multinode-390770:/home/docker/cp-test_multinode-390770-m03_multinode-390770.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770 "sudo cat /home/docker/cp-test_multinode-390770-m03_multinode-390770.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770-m03:/home/docker/cp-test.txt multinode-390770-m02:/home/docker/cp-test_multinode-390770-m03_multinode-390770-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 "sudo cat /home/docker/cp-test_multinode-390770-m03_multinode-390770-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.59s)
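
The copy matrix above repeats one pattern for every (source, destination) pair; a minimal sketch of a single round trip, using the node names from this run:

  # Copy a local file onto the primary node, forward it from there to m02,
  # then read it back over SSH to confirm the contents survived both hops.
  $ out/minikube-linux-amd64 -p multinode-390770 cp testdata/cp-test.txt multinode-390770:/home/docker/cp-test.txt
  $ out/minikube-linux-amd64 -p multinode-390770 cp multinode-390770:/home/docker/cp-test.txt \
      multinode-390770-m02:/home/docker/cp-test_multinode-390770_multinode-390770-m02.txt
  $ out/minikube-linux-amd64 -p multinode-390770 ssh -n multinode-390770-m02 \
      "sudo cat /home/docker/cp-test_multinode-390770_multinode-390770-m02.txt"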

                                                
                                    
TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-390770 node stop m03: (1.213737192s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-390770 status: exit status 7 (475.489838ms)

                                                
                                                
-- stdout --
	multinode-390770
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-390770-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-390770-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr: exit status 7 (470.142117ms)

                                                
                                                
-- stdout --
	multinode-390770
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-390770-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-390770-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:01:56.803249  389765 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:01:56.803490  389765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:01:56.803511  389765 out.go:374] Setting ErrFile to fd 2...
	I0919 23:01:56.803515  389765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:01:56.803691  389765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:01:56.803859  389765 out.go:368] Setting JSON to false
	I0919 23:01:56.803880  389765 mustload.go:65] Loading cluster: multinode-390770
	I0919 23:01:56.803928  389765 notify.go:220] Checking for updates...
	I0919 23:01:56.804293  389765 config.go:182] Loaded profile config "multinode-390770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:01:56.804317  389765 status.go:174] checking status of multinode-390770 ...
	I0919 23:01:56.804724  389765 cli_runner.go:164] Run: docker container inspect multinode-390770 --format={{.State.Status}}
	I0919 23:01:56.821773  389765 status.go:371] multinode-390770 host status = "Running" (err=<nil>)
	I0919 23:01:56.821802  389765 host.go:66] Checking if "multinode-390770" exists ...
	I0919 23:01:56.822102  389765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-390770
	I0919 23:01:56.838971  389765 host.go:66] Checking if "multinode-390770" exists ...
	I0919 23:01:56.839240  389765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:01:56.839309  389765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-390770
	I0919 23:01:56.856328  389765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/multinode-390770/id_rsa Username:docker}
	I0919 23:01:56.948598  389765 ssh_runner.go:195] Run: systemctl --version
	I0919 23:01:56.953161  389765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:01:56.964597  389765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:01:57.022838  389765 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 23:01:57.012398236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:01:57.023409  389765 kubeconfig.go:125] found "multinode-390770" server: "https://192.168.67.2:8443"
	I0919 23:01:57.023440  389765 api_server.go:166] Checking apiserver status ...
	I0919 23:01:57.023477  389765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:01:57.035459  389765 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2223/cgroup
	W0919 23:01:57.044620  389765 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2223/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:01:57.044662  389765 ssh_runner.go:195] Run: ls
	I0919 23:01:57.047850  389765 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 23:01:57.052017  389765 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 23:01:57.052043  389765 status.go:463] multinode-390770 apiserver status = Running (err=<nil>)
	I0919 23:01:57.052063  389765 status.go:176] multinode-390770 status: &{Name:multinode-390770 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:01:57.052079  389765 status.go:174] checking status of multinode-390770-m02 ...
	I0919 23:01:57.052362  389765 cli_runner.go:164] Run: docker container inspect multinode-390770-m02 --format={{.State.Status}}
	I0919 23:01:57.068811  389765 status.go:371] multinode-390770-m02 host status = "Running" (err=<nil>)
	I0919 23:01:57.068831  389765 host.go:66] Checking if "multinode-390770-m02" exists ...
	I0919 23:01:57.069077  389765 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-390770-m02
	I0919 23:01:57.086131  389765 host.go:66] Checking if "multinode-390770-m02" exists ...
	I0919 23:01:57.086374  389765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:01:57.086417  389765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-390770-m02
	I0919 23:01:57.103634  389765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32920 SSHKeyPath:/home/jenkins/minikube-integration/21594-142711/.minikube/machines/multinode-390770-m02/id_rsa Username:docker}
	I0919 23:01:57.195562  389765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:01:57.207278  389765 status.go:176] multinode-390770-m02 status: &{Name:multinode-390770-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:01:57.207317  389765 status.go:174] checking status of multinode-390770-m03 ...
	I0919 23:01:57.207619  389765 cli_runner.go:164] Run: docker container inspect multinode-390770-m03 --format={{.State.Status}}
	I0919 23:01:57.224831  389765 status.go:371] multinode-390770-m03 host status = "Stopped" (err=<nil>)
	I0919 23:01:57.224852  389765 status.go:384] host is not running, skipping remaining checks
	I0919 23:01:57.224860  389765 status.go:176] multinode-390770-m03 status: &{Name:multinode-390770-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
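
The status output above is why the test treats a non-zero exit as expected rather than as a failure; a minimal sketch of the same sequence, with the node names from this run:

  # Stop one worker, then query status: minikube exits non-zero (7 in this run)
  # whenever any node in the profile is not running, while still printing the
  # per-node state on stdout.
  $ out/minikube-linux-amd64 -p multinode-390770 node stop m03
  $ out/minikube-linux-amd64 -p multinode-390770 status; echo "exit=$?"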

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-390770 node start m03 -v=5 --alsologtostderr: (8.417336079s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.10s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.26s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-390770
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-390770
E0919 23:02:25.092521  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-390770: (22.602577754s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-390770 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-390770 --wait=true -v=5 --alsologtostderr: (50.552372827s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-390770
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.26s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-390770 node delete m03: (4.689111634s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
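
The final go-template query above is how the test confirms that only Ready nodes remain after the delete; a minimal sketch using the same template (with equivalent shell quoting) and the profile from this run:

  # Delete the third node, then print one Ready-condition status per remaining node.
  $ out/minikube-linux-amd64 -p multinode-390770 node delete m03
  $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'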

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 stop
E0919 23:03:33.472702  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-390770 stop: (21.555492078s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-390770 status: exit status 7 (89.785119ms)

                                                
                                                
-- stdout --
	multinode-390770
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-390770-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr: exit status 7 (91.996469ms)

                                                
                                                
-- stdout --
	multinode-390770
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-390770-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:03:46.583370  404117 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:03:46.583696  404117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:03:46.583707  404117 out.go:374] Setting ErrFile to fd 2...
	I0919 23:03:46.583712  404117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:03:46.583922  404117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-142711/.minikube/bin
	I0919 23:03:46.584104  404117 out.go:368] Setting JSON to false
	I0919 23:03:46.584126  404117 mustload.go:65] Loading cluster: multinode-390770
	I0919 23:03:46.584298  404117 notify.go:220] Checking for updates...
	I0919 23:03:46.584672  404117 config.go:182] Loaded profile config "multinode-390770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0919 23:03:46.584708  404117 status.go:174] checking status of multinode-390770 ...
	I0919 23:03:46.585378  404117 cli_runner.go:164] Run: docker container inspect multinode-390770 --format={{.State.Status}}
	I0919 23:03:46.604055  404117 status.go:371] multinode-390770 host status = "Stopped" (err=<nil>)
	I0919 23:03:46.604082  404117 status.go:384] host is not running, skipping remaining checks
	I0919 23:03:46.604090  404117 status.go:176] multinode-390770 status: &{Name:multinode-390770 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:03:46.604115  404117 status.go:174] checking status of multinode-390770-m02 ...
	I0919 23:03:46.604379  404117 cli_runner.go:164] Run: docker container inspect multinode-390770-m02 --format={{.State.Status}}
	I0919 23:03:46.622821  404117 status.go:371] multinode-390770-m02 host status = "Stopped" (err=<nil>)
	I0919 23:03:46.622856  404117 status.go:384] host is not running, skipping remaining checks
	I0919 23:03:46.622865  404117 status.go:176] multinode-390770-m02 status: &{Name:multinode-390770-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.74s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-390770 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-390770 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.777180807s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-390770 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-390770
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-390770-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-390770-m02 --driver=docker  --container-runtime=docker: exit status 14 (64.718156ms)

                                                
                                                
-- stdout --
	* [multinode-390770-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-390770-m02' is duplicated with machine name 'multinode-390770-m02' in profile 'multinode-390770'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-390770-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-390770-m03 --driver=docker  --container-runtime=docker: (23.501822273s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-390770
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-390770: exit status 80 (296.143019ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-390770 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-390770-m03 already exists in multinode-390770-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-390770-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-390770-m03: (2.170775425s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.08s)

                                                
                                    
TestPreload (157.02s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m23.60796416s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-551181 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-551181 image pull gcr.io/k8s-minikube/busybox: (2.226773976s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-551181
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-551181: (10.762261009s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0919 23:07:25.092179  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (57.967335615s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-551181 image list
helpers_test.go:175: Cleaning up "test-preload-551181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-551181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-551181: (2.239257649s)
--- PASS: TestPreload (157.02s)
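
The preload test above exercises one specific flow: build a cluster without the preloaded tarball, pull an extra image, restart, and confirm the image is still present. A minimal sketch of that flow, using the profile name and versions from this run:

  $ out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --preload=false \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.32.0
  $ out/minikube-linux-amd64 -p test-preload-551181 image pull gcr.io/k8s-minikube/busybox
  $ out/minikube-linux-amd64 stop -p test-preload-551181
  $ out/minikube-linux-amd64 start -p test-preload-551181 --memory=3072 --wait=true \
      --driver=docker --container-runtime=docker
  # The pulled busybox image should still appear after the restart.
  $ out/minikube-linux-amd64 -p test-preload-551181 image list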

                                                
                                    
TestScheduledStopUnix (95.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-137930 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-137930 --memory=3072 --driver=docker  --container-runtime=docker: (22.593442005s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137930 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-137930 -n scheduled-stop-137930
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137930 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 23:08:08.138291  146335 retry.go:31] will retry after 143.386µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.139483  146335 retry.go:31] will retry after 85.973µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.140664  146335 retry.go:31] will retry after 121.956µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.141837  146335 retry.go:31] will retry after 404.97µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.142986  146335 retry.go:31] will retry after 381.543µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.144162  146335 retry.go:31] will retry after 735.817µs: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.145310  146335 retry.go:31] will retry after 1.466667ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.147588  146335 retry.go:31] will retry after 1.696723ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.149788  146335 retry.go:31] will retry after 1.620748ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.151997  146335 retry.go:31] will retry after 5.522788ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.158326  146335 retry.go:31] will retry after 6.895034ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.165581  146335 retry.go:31] will retry after 7.674905ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.173450  146335 retry.go:31] will retry after 10.585719ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.184731  146335 retry.go:31] will retry after 25.173634ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.211017  146335 retry.go:31] will retry after 18.442073ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
I0919 23:08:08.230267  146335 retry.go:31] will retry after 60.839423ms: open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/scheduled-stop-137930/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137930 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137930 -n scheduled-stop-137930
E0919 23:08:33.466120  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-137930
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-137930 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-137930
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-137930: exit status 7 (73.15981ms)

                                                
                                                
-- stdout --
	scheduled-stop-137930
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137930 -n scheduled-stop-137930
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137930 -n scheduled-stop-137930: exit status 7 (69.663589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-137930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-137930
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-137930: (1.678532327s)
--- PASS: TestScheduledStopUnix (95.73s)
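
The scheduled-stop sequence above is driven entirely by the CLI; a minimal sketch of the schedule, cancel, and re-schedule cycle, with the profile from this run:

  # Schedule a stop five minutes out; TimeToStop is non-empty while the stop is pending.
  $ out/minikube-linux-amd64 stop -p scheduled-stop-137930 --schedule 5m
  $ out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-137930
  # Cancel it, then schedule a short 15s stop and wait for the host to report Stopped.
  $ out/minikube-linux-amd64 stop -p scheduled-stop-137930 --cancel-scheduled
  $ out/minikube-linux-amd64 stop -p scheduled-stop-137930 --schedule 15s
  $ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-137930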

                                                
                                    
TestSkaffold (81.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1614328086 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-851107 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-851107 --memory=3072 --driver=docker  --container-runtime=docker: (22.679491473s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1614328086 run --minikube-profile skaffold-851107 --kube-context skaffold-851107 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1614328086 run --minikube-profile skaffold-851107 --kube-context skaffold-851107 --status-check=true --port-forward=false --interactive=false: (40.892020554s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-67d5f8874f-ssm8k" [50ccb526-1adc-4cb1-b899-5b1a457a1c07] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004386014s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-5849966677-mvnds" [0a6a30c4-c5df-4e2c-83f4-1a05738c15b9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003146915s
helpers_test.go:175: Cleaning up "skaffold-851107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-851107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-851107: (3.210213126s)
--- PASS: TestSkaffold (81.06s)
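
The skaffold run above only needs the minikube profile and kube-context to match the cluster just created; a minimal sketch, assuming a skaffold binary on PATH (the test itself drives a downloaded copy under /tmp):

  $ out/minikube-linux-amd64 start -p skaffold-851107 --memory=3072 --driver=docker --container-runtime=docker
  # --minikube-profile and --kube-context tie the build and deploy to that cluster.
  $ skaffold run --minikube-profile skaffold-851107 --kube-context skaffold-851107 \
      --status-check=true --port-forward=false --interactive=false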

                                                
                                    
TestInsufficientStorage (9.85s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-416524 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-416524 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.583430688s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6f7e6996-1b70-4ab0-ad87-48b3832a44b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-416524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e6cbf91-7133-4ae0-add8-fa15aa80d0e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"b24520cb-446b-4cc2-8def-182accb72eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d68293f-f7ee-48b0-8f01-a75f73d9cb19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig"}}
	{"specversion":"1.0","id":"d939a32e-0848-4063-82f8-bc744a637006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube"}}
	{"specversion":"1.0","id":"7a2e6200-cfd1-4b79-8c14-6b544054c3dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0da775ba-b828-4b15-aa55-168d5103ee2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"93162551-f2a0-48ed-b35b-eb2a4ca0579f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"68a81dba-fec8-4962-b686-0f4d91ab84fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1d10056d-e684-4943-b853-ce6bf4015089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa5f9285-fb26-4524-b870-2df182fa7bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5688971a-788c-4ad9-b4af-aaac50ccf18e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-416524\" primary control-plane node in \"insufficient-storage-416524\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfcf8565-463d-46a5-9b23-af75e252d4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fb62c8c-546c-4008-8e0c-db9e7cec0ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9600a2c-a8d5-4345-b12a-ec3dd2468913","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-416524 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-416524 --output=json --layout=cluster: exit status 7 (275.957669ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-416524","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416524","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:10:49.754918  442530 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-416524" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-416524 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-416524 --output=json --layout=cluster: exit status 7 (277.153252ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-416524","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-416524","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:10:50.032728  442634 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-416524" does not appear in /home/jenkins/minikube-integration/21594-142711/kubeconfig
	E0919 23:10:50.043994  442634 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/insufficient-storage-416524/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-416524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-416524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-416524: (1.709510195s)
--- PASS: TestInsufficientStorage (9.85s)
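
The storage failure above is simulated rather than real: the run caps the reported disk capacity through the test-only environment values visible in the JSON events (MINIKUBE_TEST_STORAGE_CAPACITY=100, MINIKUBE_TEST_AVAILABLE_STORAGE=19) and then expects exit code 26. A minimal sketch of the same simulation, assuming those variables are set in the environment as the test does:

  # With capacity overridden, start fails with RSRC_DOCKER_STORAGE (exit status 26)...
  $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-amd64 start -p insufficient-storage-416524 --memory=3072 \
      --output=json --wait=true --driver=docker --container-runtime=docker
  # ...and cluster status reports StatusCode 507 (InsufficientStorage) for the node.
  $ out/minikube-linux-amd64 status -p insufficient-storage-416524 --output=json --layout=cluster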

                                                
                                    
TestRunningBinaryUpgrade (53.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3604918593 start -p running-upgrade-298651 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3604918593 start -p running-upgrade-298651 --memory=3072 --vm-driver=docker  --container-runtime=docker: (23.603151032s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-298651 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-298651 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.867308276s)
helpers_test.go:175: Cleaning up "running-upgrade-298651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-298651
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-298651: (2.256613212s)
--- PASS: TestRunningBinaryUpgrade (53.80s)

                                                
                                    
TestKubernetesUpgrade (365.9s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.762888499s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-122916
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-122916: (10.76091361s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-122916 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-122916 status --format={{.Host}}: exit status 7 (83.182235ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m46.642125836s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-122916 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (93.435678ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-122916] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-122916
	    minikube start -p kubernetes-upgrade-122916 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1229162 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-122916 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.890285772s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-122916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-122916
E0919 23:18:33.468427  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-122916: (2.582750022s)
--- PASS: TestKubernetesUpgrade (365.90s)
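
The upgrade test above follows the path exercised here: start on an old Kubernetes version, stop, restart on the new version, and verify that a downgrade attempt is refused until the cluster is recreated. A minimal sketch of that path, with the version pair and profile from this run:

  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 \
      --kubernetes-version=v1.28.0 --driver=docker --container-runtime=docker
  $ out/minikube-linux-amd64 stop -p kubernetes-upgrade-122916
  # Upgrade in place by starting the same profile with the newer version.
  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 \
      --kubernetes-version=v1.34.0 --driver=docker --container-runtime=docker
  # Asking for the older version again exits with K8S_DOWNGRADE_UNSUPPORTED (status 106);
  # deleting and recreating the profile is the suggested way to get back to v1.28.0.
  $ out/minikube-linux-amd64 start -p kubernetes-upgrade-122916 --memory=3072 \
      --kubernetes-version=v1.28.0 --driver=docker --container-runtime=docker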

                                                
                                    
TestMissingContainerUpgrade (96.53s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.251305907 start -p missing-upgrade-360071 --memory=3072 --driver=docker  --container-runtime=docker
E0919 23:12:08.164007  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.251305907 start -p missing-upgrade-360071 --memory=3072 --driver=docker  --container-runtime=docker: (41.945042345s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-360071
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-360071: (10.428257437s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-360071
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-360071 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-360071 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.851361166s)
helpers_test.go:175: Cleaning up "missing-upgrade-360071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-360071
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-360071: (2.226225973s)
--- PASS: TestMissingContainerUpgrade (96.53s)
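The sequence above simulates a cluster whose node container has gone missing: an older minikube binary creates the profile, the Docker container is stopped and removed out from under it, and the current binary is expected to recreate it on the next start. A minimal sketch of that flow, with example binary path and profile name (not the actual test code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output and error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err: %v)\n", name, args, out, err)
}

func main() {
	profile := "missing-upgrade-demo"   // example profile name
	oldBinary := "/tmp/minikube-v1.32.0" // example path to an older release binary

	run(oldBinary, "start", "-p", profile, "--driver=docker") // create the cluster with the old binary
	run("docker", "stop", profile)                            // stop the node container...
	run("docker", "rm", profile)                              // ...and remove it entirely
	run("minikube", "start", "-p", profile, "--driver=docker") // the current binary should recreate it
}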

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (88.303502ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-860493] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-142711/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-142711/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
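The check above relies on minikube rejecting --no-kubernetes combined with --kubernetes-version up front, before any cluster work, exiting with status 14 and an MK_USAGE error. A minimal sketch of asserting that behavior from Go, assuming a minikube binary on PATH and a hypothetical profile name (not the actual no_kubernetes_test.go code):

package sketch

import (
	"bytes"
	"errors"
	"os/exec"
	"strings"
	"testing"
)

// TestNoKubernetesVersionConflict expects minikube to refuse the flag
// combination immediately with exit status 14 and an MK_USAGE error.
func TestNoKubernetesVersionConflict(t *testing.T) {
	cmd := exec.Command("minikube", "start", "-p", "nok8s-demo",
		"--no-kubernetes", "--kubernetes-version=v1.28.0", "--driver=docker")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 14 {
		t.Fatalf("expected exit status 14, got %v", err)
	}
	if !strings.Contains(stderr.String(), "MK_USAGE") {
		t.Errorf("expected an MK_USAGE error, stderr was: %q", stderr.String())
	}
}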

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (50.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860493 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860493 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (49.96531307s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-860493 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.257872757s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-860493 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-860493 status -o json: exit status 2 (302.257744ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-860493","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-860493
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-860493: (1.770692842s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.33s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (68.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1424933387 start -p stopped-upgrade-930104 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1424933387 start -p stopped-upgrade-930104 --memory=3072 --vm-driver=docker  --container-runtime=docker: (50.223457055s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1424933387 -p stopped-upgrade-930104 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1424933387 -p stopped-upgrade-930104 stop: (1.918939537s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-930104 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-930104 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.416965926s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.56s)

                                                
                                    
TestNoKubernetes/serial/Start (9.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860493 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (9.461932021s)
--- PASS: TestNoKubernetes/serial/Start (9.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-860493 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-860493 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.498353ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
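`systemctl is-active` exits 0 only when the unit is active, so the non-zero exit above (status 3 inside the node, surfaced as exit status 1 by minikube ssh) is the expected, passing outcome for a profile started without Kubernetes. A minimal sketch of the same check, assuming the profile from the log is still running:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means kubelet is active; any non-zero exit means it is not.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-860493",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running (expected for --no-kubernetes):", err)
		return
	}
	fmt.Println("kubelet is running - unexpected for a --no-kubernetes profile")
}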

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

                                                
                                    
TestNoKubernetes/serial/Stop (4.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-860493
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-860493: (4.559208385s)
--- PASS: TestNoKubernetes/serial/Stop (4.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (11.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-860493 --driver=docker  --container-runtime=docker
E0919 23:12:25.092388  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-860493 --driver=docker  --container-runtime=docker: (11.277751857s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.28s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-860493 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-860493 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.635966ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-930104
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
TestPause/serial/Start (65.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-606422 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-606422 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m5.130037482s)
--- PASS: TestPause/serial/Start (65.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (68.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m8.419141322s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (58.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.758330065s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.76s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (87.67s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-606422 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0919 23:15:27.673704  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.680162  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.691613  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.713042  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.754490  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.836023  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:27.997670  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:28.320006  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:28.962119  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:30.243910  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:32.805731  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:37.927748  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:15:48.169620  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-606422 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m27.648757818s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (87.67s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-361266 "pgrep -a kubelet"
I0919 23:15:50.707463  146335 config.go:182] Loaded profile config "auto-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sw6pg" [1fef7055-80b1-4158-9882-402a7d6f2e65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sw6pg" [1fef7055-80b1-4158-9882-402a7d6f2e65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003909633s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
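The three checks above probe the auto CNI setup from inside the netcat pod: DNS resolution of kubernetes.default, loopback reachability on port 8080, and hairpin traffic (the pod reaching itself through its own Service name). A minimal sketch that drives the same probes through kubectl exec; the context name is copied from the log and should be treated as an example value:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment of the given context and
// returns the command's error (nil means the probe succeeded).
func probe(context string, args ...string) error {
	base := []string{"--context", context, "exec", "deployment/netcat", "--"}
	cmd := exec.Command("kubectl", append(base, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx := "auto-361266" // example context name taken from the log
	// DNS: resolve the kubernetes.default Service from inside the pod.
	_ = probe(ctx, "nslookup", "kubernetes.default")
	// Localhost: the pod can reach its own container port over loopback.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// Hairpin: the pod can reach itself through its own Service name.
	_ = probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}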

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-g2gxk" [0a25c853-3420-4942-9afc-a2cc7e8456e3] Running
E0919 23:16:08.651302  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004382467s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-361266 "pgrep -a kubelet"
I0919 23:16:10.197419  146335 config.go:182] Loaded profile config "kindnet-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-smzrx" [8e702dfe-e519-45ad-af89-0c6f8a7d4a6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-smzrx" [8e702dfe-e519-45ad-af89-0c6f8a7d4a6c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003790523s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (49.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (49.705490239s)
--- PASS: TestNetworkPlugins/group/calico/Start (49.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (82.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m22.317035509s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (82.32s)

                                                
                                    
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-606422 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-606422 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-606422 --output=json --layout=cluster: exit status 2 (352.999655ms)

                                                
                                                
-- stdout --
	{"Name":"pause-606422","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-606422","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
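A non-zero exit from `minikube status` signals that some component is not in the expected Running state, which is exactly what a paused cluster should report; the JSON layout above carries the detail (StatusCode 418 / "Paused", kubelet "Stopped"). A minimal sketch for decoding just the fields visible in this output; the struct is illustrative, not minikube's own status type:

package main

import (
	"encoding/json"
	"fmt"
)

// componentStatus mirrors the per-component fields shown in the log output.
type componentStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

// clusterStatus mirrors only the cluster-layout fields visible above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]componentStatus
	}
}

func main() {
	// Trimmed example payload based on the log output above.
	raw := `{"Name":"pause-606422","StatusCode":418,"StatusName":"Paused",` +
		`"Nodes":[{"Name":"pause-606422","StatusCode":200,"StatusName":"OK",` +
		`"Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, "- kubelet:", st.Nodes[0].Components["kubelet"].StatusName)
}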

                                                
                                    
TestPause/serial/Unpause (0.59s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-606422 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

                                                
                                    
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-606422 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
TestPause/serial/DeletePaused (3.13s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-606422 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-606422 --alsologtostderr -v=5: (3.128534584s)
--- PASS: TestPause/serial/DeletePaused (3.13s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-606422
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-606422: exit status 1 (18.659038ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-606422: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.76s)
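Cleanup is verified negatively: after `minikube delete`, the profile's Docker volume must be gone, so `docker volume inspect` failing with "no such volume" is the passing result. A minimal sketch of that check, using the profile name from the log as an example value:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "pause-606422" // profile/volume name from the log; example value

	// Inspect is expected to fail once the profile has been deleted.
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume removed as expected")
		return
	}
	fmt.Println("volume still present or unexpected error:", string(out))
}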

                                                
                                    
TestNetworkPlugins/group/false/Start (78.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0919 23:16:49.612608  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m18.514953662s)
--- PASS: TestNetworkPlugins/group/false/Start (78.52s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hrk8q" [4cad975b-e956-42bb-b83b-b7c7c94181d3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003984936s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-361266 "pgrep -a kubelet"
I0919 23:17:15.405994  146335 config.go:182] Loaded profile config "calico-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r9f9v" [2a222c3e-df58-47fd-9fc8-6dab353a62f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r9f9v" [2a222c3e-df58-47fd-9fc8-6dab353a62f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003325075s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m18.241751357s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-361266 "pgrep -a kubelet"
I0919 23:18:02.017121  146335 config.go:182] Loaded profile config "custom-flannel-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-brtcj" [9175acb3-af33-4a3d-bfb7-87a055fdfe06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-brtcj" [9175acb3-af33-4a3d-bfb7-87a055fdfe06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004313058s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-361266 "pgrep -a kubelet"
I0919 23:18:06.297978  146335 config.go:182] Loaded profile config "false-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h99nf" [1f6dfa33-5622-42fe-b52f-c7cf1c9574d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h99nf" [1f6dfa33-5622-42fe-b52f-c7cf1c9574d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003500641s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0919 23:18:11.534316  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (115.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-361266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m55.052187319s)
--- PASS: TestNetworkPlugins/group/flannel/Start (115.05s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-361266 "pgrep -a kubelet"
I0919 23:19:04.143150  146335 config.go:182] Loaded profile config "enable-default-cni-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kcn6x" [8e3db586-8812-4db0-96a6-abccb5fbca2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kcn6x" [8e3db586-8812-4db0-96a6-abccb5fbca2c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00405254s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nx52g" [22a041f5-2894-420f-8674-c55c3b7d6c17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004179941s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-361266 "pgrep -a kubelet"
I0919 23:20:35.403674  146335 config.go:182] Loaded profile config "flannel-361266": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-361266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cww9x" [1b74087f-06ba-4076-b3a0-7173c87f4877] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cww9x" [1b74087f-06ba-4076-b3a0-7173c87f4877] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004171833s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-361266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-361266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (214.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0919 23:21:04.220189  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:04.542540  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:05.184682  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:06.466308  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:09.027758  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:11.424925  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/auto-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:14.150005  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:24.391328  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:31.906580  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/auto-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:21:44.872722  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.109318  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.115767  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.127259  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.148701  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.190146  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.271627  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.433088  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:09.754404  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:10.396768  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:11.678750  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:12.868143  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/auto-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:14.240566  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:19.362741  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:25.091163  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/addons-810554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:25.834751  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:29.604749  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:22:50.086127  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.209421  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.215803  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.227232  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.248607  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.290030  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.371542  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.533065  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:02.854699  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:03.496583  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:04.778121  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.490806  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.497202  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.508587  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.529931  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.571359  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.652851  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:06.814418  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:07.136136  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:07.339695  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:07.777622  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (3m34.073711828s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (214.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m39.16960367s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0919 23:23:33.466721  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/functional-432755/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:34.790331  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/auto-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:43.185697  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:47.467626  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:23:47.756381  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.454758  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.461130  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.472486  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.493883  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.535345  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.616823  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:04.778779  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:05.100612  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:05.742746  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:07.024734  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m35.11197491s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-359569 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5b59928a-3af7-4037-882a-de2e0f43bd9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 23:24:14.709153  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [5b59928a-3af7-4037-882a-de2e0f43bd9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003765446s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-359569 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-359569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-359569 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-359569 --alsologtostderr -v=3
E0919 23:24:24.147057  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:24.950839  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:24:28.429036  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-359569 --alsologtostderr -v=3: (10.720410264s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569: exit status 7 (75.863822ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-359569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (83.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-359569 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m23.384472464s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-359569 -n old-k8s-version-359569
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (83.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-834234 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b7712c70-c243-4e03-9b68-a35ad411c365] Pending
helpers_test.go:352: "busybox" [b7712c70-c243-4e03-9b68-a35ad411c365] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b7712c70-c243-4e03-9b68-a35ad411c365] Running
E0919 23:24:45.432704  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003714378s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-834234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-834234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-834234 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-834234 --alsologtostderr -v=3
E0919 23:24:52.970067  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/calico-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-834234 --alsologtostderr -v=3: (10.716871971s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-834234 -n no-preload-834234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-834234 -n no-preload-834234: exit status 7 (70.735044ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-834234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (57.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-834234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (57.500195142s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-834234 -n no-preload-834234
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-253767 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e2aa71a6-2c97-45b7-a86f-4cc0b1b1dba8] Pending
helpers_test.go:352: "busybox" [e2aa71a6-2c97-45b7-a86f-4cc0b1b1dba8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e2aa71a6-2c97-45b7-a86f-4cc0b1b1dba8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00411543s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-253767 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-485703 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [077a0d06-08ba-44d7-a1cb-2ddd4df9cdf1] Pending
helpers_test.go:352: "busybox" [077a0d06-08ba-44d7-a1cb-2ddd4df9cdf1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [077a0d06-08ba-44d7-a1cb-2ddd4df9cdf1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003935889s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-485703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-253767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-253767 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-253767 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-253767 --alsologtostderr -v=3: (10.962539739s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-485703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-485703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-485703 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-485703 --alsologtostderr -v=3: (10.837017364s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253767 -n embed-certs-253767
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253767 -n embed-certs-253767: exit status 7 (92.61744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-253767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (81.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0919 23:25:26.394400  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:27.673643  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/skaffold-851107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.106547  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.113925  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.125360  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.146801  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.188205  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.269729  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:29.431298  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-253767 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m21.193394951s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-253767 -n embed-certs-253767
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (81.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
E0919 23:25:29.753309  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703: exit status 7 (66.203584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-485703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (79.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0919 23:25:30.395200  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:31.677126  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:34.238690  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:39.360803  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:46.069221  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/custom-flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:49.603029  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:50.350925  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/false-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:50.926946  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/auto-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-485703 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m18.767783629s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (79.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nlr4d" [79f7fb7f-084e-49c8-89ec-4c532a4ccf19] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004327133s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g582h" [d653a8af-9c0e-4c05-be48-867f23f81a92] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g582h" [d653a8af-9c0e-4c05-be48-867f23f81a92] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00302506s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nlr4d" [79f7fb7f-084e-49c8-89ec-4c532a4ccf19] Running
E0919 23:26:03.895207  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kindnet-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003251174s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-359569 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-g582h" [d653a8af-9c0e-4c05-be48-867f23f81a92] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003435807s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-834234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-359569 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-834234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-834234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-834234 -n no-preload-834234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-834234 -n no-preload-834234: exit status 2 (386.834607ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-834234 -n no-preload-834234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-834234 -n no-preload-834234: exit status 2 (437.448229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-834234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-834234 -n no-preload-834234
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-834234 -n no-preload-834234: exit status 2 (458.37072ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-834234 -n no-preload-834234
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (31.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-343622 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-343622 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (31.15097592s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-343622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-343622 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-343622 --alsologtostderr -v=3: (10.763948019s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hvbvx" [35859758-cdba-498d-b092-0ed29deaed12] Running
E0919 23:26:48.316275  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/enable-default-cni-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003758046s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nv7hz" [bec972d2-a76c-4b75-9978-61719c0fbce4] Running
E0919 23:26:51.046242  146335 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/flannel-361266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nv7hz" [bec972d2-a76c-4b75-9978-61719c0fbce4] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nv7hz" [bec972d2-a76c-4b75-9978-61719c0fbce4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002743233s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hvbvx" [35859758-cdba-498d-b092-0ed29deaed12] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004009387s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-253767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nv7hz" [bec972d2-a76c-4b75-9978-61719c0fbce4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00349426s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-485703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343622 -n newest-cni-343622
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343622 -n newest-cni-343622: exit status 7 (67.32022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-343622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-343622 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-343622 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (12.761667646s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-343622 -n newest-cni-343622
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-253767 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
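VerifyKubernetesImages lists the images loaded in the profile and reports anything outside the expected Kubernetes image set, such as the busybox image noted above. A simplified Go sketch of that kind of check, assuming the binary path from the log, the default one-image-per-line output (the test itself uses --format=json), and an illustrative prefix allow-list that is not the authoritative one from the test code:

// Hypothetical sketch: list images in the profile and flag any that do not
// come from a repository the check expects.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "embed-certs-253767", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Illustrative prefixes only; the real allow-list lives in the test code.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		known := false
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if img != "" && !known {
			fmt.Println("Found non-minikube image:", img) // e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
		}
	}
}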

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.33s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-253767 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253767 -n embed-certs-253767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253767 -n embed-certs-253767: exit status 2 (288.820541ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-253767 -n embed-certs-253767
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-253767 -n embed-certs-253767: exit status 2 (292.022532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-253767 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-253767 -n embed-certs-253767
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-253767 -n embed-certs-253767
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.33s)
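The pause subtest above drives `minikube pause`, reads the component states through `status --format={{.APIServer}}` and `--format={{.Kubelet}}` (both returned with exit status 2 while paused, logged as "(may be ok)"), and then unpauses. A minimal Go sketch of that round trip, assuming the same binary path and profile name as in the log; error handling is trimmed for brevity:

// Hypothetical sketch of the pause/verify/unpause sequence exercised above.
package main

import (
	"fmt"
	"os/exec"
)

const minikube = "out/minikube-linux-amd64" // binary path from the log

func componentState(profile, field string) string {
	// Exit status 2 is expected while paused, so the error is ignored here
	// and only the printed state ("Paused" / "Stopped") is returned.
	out, _ := exec.Command(minikube, "status",
		fmt.Sprintf("--format={{.%s}}", field), "-p", profile, "-n", profile).Output()
	return string(out)
}

func main() {
	profile := "embed-certs-253767"
	_ = exec.Command(minikube, "pause", "-p", profile).Run()
	fmt.Print("apiserver: ", componentState(profile, "APIServer")) // expect "Paused"
	fmt.Print("kubelet:   ", componentState(profile, "Kubelet"))   // expect "Stopped"
	_ = exec.Command(minikube, "unpause", "-p", profile).Run()
}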

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-485703 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-485703 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703: exit status 2 (304.809254ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703: exit status 2 (370.486376ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-485703 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-485703 -n default-k8s-diff-port-485703
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-343622 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.23s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-343622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343622 -n newest-cni-343622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343622 -n newest-cni-343622: exit status 2 (289.947913ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343622 -n newest-cni-343622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343622 -n newest-cni-343622: exit status 2 (294.497363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-343622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-343622 -n newest-cni-343622
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-343622 -n newest-cni-343622
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.23s)

                                                
                                    

Test skip (22/334)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.29s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-361266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-361266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-073186
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-142711/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:13:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-122916
contexts:
- context:
    cluster: cert-expiration-073186
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:11:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-073186
  name: cert-expiration-073186
- context:
    cluster: kubernetes-upgrade-122916
    user: kubernetes-upgrade-122916
  name: kubernetes-upgrade-122916
current-context: ""
kind: Config
users:
- name: cert-expiration-073186
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/cert-expiration-073186/client.crt
    client-key: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/cert-expiration-073186/client.key
- name: kubernetes-upgrade-122916
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubernetes-upgrade-122916/client.crt
    client-key: /home/jenkins/minikube-integration/21594-142711/.minikube/profiles/kubernetes-upgrade-122916/client.key
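The kubeconfig above only defines the cert-expiration-073186 and kubernetes-upgrade-122916 contexts and has an empty current-context, which is why every kubectl call against the never-started cilium-361266 profile in this debug log fails with "context was not found". A small Go sketch, assuming client-go is available and KUBECONFIG points at this file, for checking whether a named context exists:

// Hypothetical check, not part of the minikube test suite: load the
// kubeconfig via client-go's default loading rules (honours KUBECONFIG)
// and look up a context by name.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	if _, ok := cfg.Contexts["cilium-361266"]; !ok {
		fmt.Println(`context "cilium-361266" does not exist`)
	}
}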

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-361266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-361266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361266"

                                                
                                                
----------------------- debugLogs end: cilium-361266 [took: 5.121210333s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-361266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-361266
--- SKIP: TestNetworkPlugins/group/cilium (5.29s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-481061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-481061
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)